Dec 11 13:53:05 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 13:53:05 crc restorecon[4709]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 13:53:05 crc restorecon[4709]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc 
restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc 
restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 
13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 
13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc 
restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:05 crc 
restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:05 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 13:53:06 crc restorecon[4709]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc 
restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 
crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc 
restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc 
restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc 
restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc 
restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:53:06 crc restorecon[4709]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 11 13:53:06 crc kubenswrapper[4924]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.635717 4924 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639143 4924 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639163 4924 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639168 4924 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639172 4924 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639176 4924 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639180 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639184 4924 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639188 4924 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639191 4924 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639195 4924 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639200 4924 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639205 4924 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639211 4924 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639217 4924 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639223 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639228 4924 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639232 4924 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639242 4924 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639247 4924 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639251 4924 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639255 4924 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639259 4924 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639264 4924 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639269 4924 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639272 4924 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639276 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639280 4924 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639284 4924 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639288 4924 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639292 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639295 4924 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639299 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639303 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639307 4924 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639313 4924 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639316 4924 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639320 4924 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639340 4924 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639344 4924 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639348 4924 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639351 4924 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639355 4924 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639358 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639361 4924 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639365 4924 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639369 4924 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639372 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639376 4924 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639380 4924 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639383 4924 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639387 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639390 4924 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639394 4924 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639397 4924 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639401 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639405 4924 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639409 4924 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639413 4924 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639416 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639419 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639424 4924 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639428 4924 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639432 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639436 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639440 4924 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639444 4924 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639447 4924 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639451 4924 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639454 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639458 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.639462 4924 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639776 4924 flags.go:64] FLAG: --address="0.0.0.0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639790 4924 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639800 4924 flags.go:64] FLAG: --anonymous-auth="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639806 4924 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639812 4924 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639816 4924 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639823 4924 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639829 4924 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639833 4924 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639838 4924 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639842 4924 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639846 4924 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639850 4924 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639854 4924 flags.go:64] FLAG: --cgroup-root=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639858 4924 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639862 4924 flags.go:64] FLAG: --client-ca-file=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639865 4924 flags.go:64] FLAG: --cloud-config=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639869 4924 flags.go:64] FLAG: --cloud-provider=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639873 4924 flags.go:64] FLAG: --cluster-dns="[]"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639878 4924 flags.go:64] FLAG: --cluster-domain=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639882 4924 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639886 4924 flags.go:64] FLAG: --config-dir=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639890 4924 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639894 4924 flags.go:64] FLAG: --container-log-max-files="5"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639900 4924 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639905 4924 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639909 4924 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639913 4924 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639918 4924 flags.go:64] FLAG: --contention-profiling="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639922 4924 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639926 4924 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639930 4924 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639934 4924 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639940 4924 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639944 4924 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639953 4924 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639958 4924 flags.go:64] FLAG: --enable-load-reader="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639962 4924 flags.go:64] FLAG: --enable-server="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639966 4924 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639972 4924 flags.go:64] FLAG: --event-burst="100"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639976 4924 flags.go:64] FLAG: --event-qps="50"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639981 4924 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639985 4924 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639990 4924 flags.go:64] FLAG: --eviction-hard=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.639996 4924 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640000 4924 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640004 4924 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640008 4924 flags.go:64] FLAG: --eviction-soft=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640012 4924 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640015 4924 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640019 4924 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640023 4924 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640027 4924 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640030 4924 flags.go:64] FLAG: --fail-swap-on="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640035 4924 flags.go:64] FLAG: --feature-gates=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640039 4924 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640043 4924 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640047 4924 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640051 4924 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640055 4924 flags.go:64] FLAG: --healthz-port="10248"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640059 4924 flags.go:64] FLAG: --help="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640063 4924 flags.go:64] FLAG: --hostname-override=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640067 4924 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640071 4924 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640075 4924 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640079 4924 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640083 4924 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640087 4924 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640091 4924 flags.go:64] FLAG: --image-service-endpoint=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640094 4924 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640098 4924 flags.go:64] FLAG: --kube-api-burst="100"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640103 4924 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640109 4924 flags.go:64] FLAG: --kube-api-qps="50"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640113 4924 flags.go:64] FLAG: --kube-reserved=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640117 4924 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640122 4924 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640126 4924 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640130 4924 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640134 4924 flags.go:64] FLAG: --lock-file=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640137 4924 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640141 4924 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640145 4924 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640152 4924 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640155 4924 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640159 4924 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640163 4924 flags.go:64] FLAG: --logging-format="text"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640167 4924 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640175 4924 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640179 4924 flags.go:64] FLAG: --manifest-url=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640183 4924 flags.go:64] FLAG: --manifest-url-header=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640190 4924 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640195 4924 flags.go:64] FLAG: --max-open-files="1000000"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640201 4924 flags.go:64] FLAG: --max-pods="110"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640205 4924 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640209 4924 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640213 4924 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640217 4924 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640221 4924 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640225 4924 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640229 4924 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640240 4924 flags.go:64] FLAG: --node-status-max-images="50"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640244 4924 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640248 4924 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640251 4924 flags.go:64] FLAG: --pod-cidr=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640256 4924 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640262 4924 flags.go:64] FLAG: --pod-manifest-path=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640266 4924 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640271 4924 flags.go:64] FLAG: --pods-per-core="0"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640276 4924 flags.go:64] FLAG: --port="10250"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640280 4924 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640284 4924 flags.go:64] FLAG: --provider-id=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640288 4924 flags.go:64] FLAG: --qos-reserved=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640292 4924 flags.go:64] FLAG: --read-only-port="10255"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640296 4924 flags.go:64] FLAG: --register-node="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640300 4924 flags.go:64] FLAG: --register-schedulable="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640304 4924 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640319 4924 flags.go:64] FLAG: --registry-burst="10"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640339 4924 flags.go:64] FLAG: --registry-qps="5"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640343 4924 flags.go:64] FLAG: --reserved-cpus=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640347 4924 flags.go:64] FLAG: --reserved-memory=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640353 4924 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640358 4924 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640362 4924 flags.go:64] FLAG: --rotate-certificates="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640366 4924 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640370 4924 flags.go:64] FLAG: --runonce="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640373 4924 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640378 4924 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640382 4924 flags.go:64] FLAG: --seccomp-default="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640386 4924 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640390 4924 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640395 4924 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640399 4924 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640404 4924 flags.go:64] FLAG: --storage-driver-password="root"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640409 4924 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640413 4924 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640421 4924 flags.go:64] FLAG: --storage-driver-user="root"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640426 4924 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640431 4924 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640436 4924 flags.go:64] FLAG: --system-cgroups=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640440 4924 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640446 4924 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640451 4924 flags.go:64] FLAG: --tls-cert-file=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640455 4924 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640461 4924 flags.go:64] FLAG: --tls-min-version=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640465 4924 flags.go:64] FLAG: --tls-private-key-file=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640470 4924 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640474 4924 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640478 4924 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640482 4924 flags.go:64] FLAG: --v="2"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640488 4924 flags.go:64] FLAG: --version="false"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640494 4924 flags.go:64] FLAG: --vmodule=""
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640499 4924 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.640503 4924 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640610 4924 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640615 4924 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640619 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640623 4924 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640627 4924 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640631 4924 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640635 4924 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640640 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640644 4924 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640649 4924 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640653 4924 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640658 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640663 4924 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640667 4924 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640673 4924 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640680 4924 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640685 4924 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640690 4924 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640695 4924 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640700 4924 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640704 4924 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640709 4924 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640715 4924 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640721 4924 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640726 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640732 4924 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640737 4924 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640742 4924 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640747 4924 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640751 4924 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640756 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640760 4924 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640765 4924 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640770 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640774 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640779 4924 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640783 4924 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640788 4924 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640792 4924 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640797 4924 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640801 4924 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640806 4924 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640812 4924 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640819 4924 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640824 4924 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640829 4924 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640836 4924 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640843 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640848 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640853 4924 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640859 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640864 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640869 4924 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640875 4924 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640879 4924 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640883 4924 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640887 4924 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640892 4924 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640896 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640900 4924 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640904 4924 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640908 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640913 4924 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640918 4924 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640922 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640927 4924 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640931 4924 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640935 4924 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640939 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640944 4924 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.640947 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.641139 4924 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.652855 4924 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.652912 4924 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653032 4924 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653042 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653047 4924 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653051 4924 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653056 4924 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653061 4924 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653066 4924 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653072 4924 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653077 4924 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653081 4924 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653086 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653090 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653094 4924 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653099 4924 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653104 4924 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653109 4924 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653114 4924 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653118 4924 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653123 4924 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653128 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653134 4924 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 
13:53:06.653139 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653142 4924 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653146 4924 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653150 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653154 4924 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653157 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653160 4924 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653164 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653168 4924 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653172 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653177 4924 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653182 4924 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653187 4924 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653195 4924 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653199 4924 feature_gate.go:330] unrecognized feature gate: 
MultiArchInstallAzure Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653204 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653209 4924 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653213 4924 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653220 4924 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653228 4924 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653233 4924 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653238 4924 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653243 4924 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653249 4924 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653256 4924 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653261 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653267 4924 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653272 4924 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653276 4924 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653281 4924 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653287 4924 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653292 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653297 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653302 4924 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653307 4924 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653312 4924 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653315 4924 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653320 4924 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653341 4924 feature_gate.go:330] 
unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653346 4924 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653351 4924 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653356 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653363 4924 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653370 4924 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653374 4924 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653380 4924 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653384 4924 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653389 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653393 4924 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653398 4924 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.653405 4924 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653556 4924 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653564 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653569 4924 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653573 4924 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653578 4924 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653581 4924 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653585 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653589 4924 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653593 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653597 4924 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653601 4924 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653607 4924 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653611 4924 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 
13:53:06.653616 4924 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653620 4924 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653624 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653629 4924 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653636 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653640 4924 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653644 4924 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653647 4924 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653652 4924 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653656 4924 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653661 4924 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653665 4924 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653669 4924 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653673 4924 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653676 4924 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653680 4924 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653685 4924 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653690 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653693 4924 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653697 4924 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653701 4924 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653706 4924 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653710 4924 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653714 4924 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653718 4924 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653722 4924 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653725 4924 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653729 4924 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653734 4924 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653738 4924 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653744 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653748 4924 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653753 4924 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653758 4924 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653762 4924 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653766 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653770 4924 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653775 4924 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653779 4924 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653784 4924 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653788 4924 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653792 4924 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653795 4924 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:53:06 crc kubenswrapper[4924]: 
W1211 13:53:06.653799 4924 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653802 4924 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653806 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653810 4924 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653813 4924 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653817 4924 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653821 4924 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653824 4924 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653828 4924 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653831 4924 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653835 4924 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653838 4924 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653842 4924 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653846 4924 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.653850 4924 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 
13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.653855 4924 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.654053 4924 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.659659 4924 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.659795 4924 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.660369 4924 server.go:997] "Starting client certificate rotation" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.660396 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.660781 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-23 08:50:23.636331969 +0000 UTC Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.660900 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.664566 4924 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.666301 4924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.666901 4924 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.674720 4924 log.go:25] "Validated CRI v1 runtime API" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.688845 4924 log.go:25] "Validated CRI v1 image API" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.690778 4924 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.693505 4924 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-13-47-59-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.693544 4924 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.708863 4924 manager.go:217] Machine: {Timestamp:2025-12-11 13:53:06.707705468 +0000 UTC m=+0.217186465 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c872b68c-6ac6-4941-bce1-6e21ecaf912d BootID:13f79ec0-167e-4d1b-a988-47bfc5368a31 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8c:7d:57 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8c:7d:57 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:03:6b:23 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:42:0c:da Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:35:cf:92 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:19:1e:72 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:56:0c:0c:80:f7:98 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:6c:d8:6e:6e:2e Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.709132 4924 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.709265 4924 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.709745 4924 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.709902 4924 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.709939 4924 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"
Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710161 4924 topology_manager.go:138] "Creating topology manager with none policy" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710171 4924 container_manager_linux.go:303] "Creating device plugin manager" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710312 4924 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710360 4924 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710606 4924 state_mem.go:36] "Initialized new in-memory state store" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.710706 4924 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.711236 4924 kubelet.go:418] "Attempting to sync node with API server" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.711261 4924 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.711291 4924 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.711308 4924 kubelet.go:324] "Adding apiserver pod source" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.711345 4924 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.716102 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.716245 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.716247 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.716562 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.717967 4924 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.718472 4924 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720088 4924 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720880 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720933 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720945 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720956 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720974 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.720993 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721003 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721063 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721080 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721094 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721113 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721127 4924 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.721464 4924 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.722166 4924 server.go:1280] "Started kubelet" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.722827 4924 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.723388 4924 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.723569 4924 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.723366 4924 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 11 13:53:06 crc systemd[1]: Started Kubernetes Kubelet. Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.726476 4924 server.go:460] "Adding debug handlers to kubelet server" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.726548 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.726600 4924 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.727352 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:46:33.673114036 +0000 UTC Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.727862 4924 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.727923 4924 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 
13:53:06.727933 4924 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.728028 4924 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.730287 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.730386 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.730581 4924 factory.go:55] Registering systemd factory Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.730617 4924 factory.go:221] Registration of the systemd container factory successfully Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.730764 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.730564 4924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18802d98734fbf00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:53:06.722119424 +0000 UTC m=+0.231600441,LastTimestamp:2025-12-11 13:53:06.722119424 +0000 UTC m=+0.231600441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.733437 4924 factory.go:153] Registering CRI-O factory Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.733465 4924 factory.go:221] Registration of the crio container factory successfully Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.733555 4924 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.733594 4924 factory.go:103] Registering Raw factory Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.733613 4924 manager.go:1196] Started watching for new ooms in manager Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.734362 4924 manager.go:319] Starting recovery of all containers Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.745564 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.745683 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.745718 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.747174 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749192 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749219 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749234 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749249 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749273 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749289 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749345 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.749361 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751535 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751619 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 
11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751635 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751650 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751665 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751678 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751696 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751710 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: 
I1211 13:53:06.751726 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751743 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751759 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751774 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751790 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751805 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751824 4924 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751839 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751858 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751876 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751891 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751906 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751921 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.751984 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752001 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752015 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752029 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752045 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752060 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752076 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752092 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752108 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752121 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752137 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752152 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752168 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752182 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752198 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752214 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752253 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752270 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 
13:53:06.752286 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752309 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752341 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752359 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752374 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752389 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752403 4924 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752419 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752434 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752456 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752470 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752484 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752500 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752516 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752530 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752546 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752561 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752576 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752590 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752605 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752619 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752635 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752655 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752669 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752686 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752699 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752714 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752728 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752744 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752758 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752772 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752788 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752801 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752815 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752830 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752845 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752860 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752872 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752915 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752937 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752951 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752965 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752980 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" 
seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.752995 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753068 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753129 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753144 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753161 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753175 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753194 4924 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753206 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753220 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753233 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753254 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753271 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753286 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753299 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753316 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753350 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753370 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753388 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753403 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753422 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753438 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753454 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753469 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753484 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753500 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753517 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753534 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753549 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753563 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753579 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753595 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753610 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753630 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753645 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753657 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753671 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753687 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753702 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753719 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753734 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753800 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753820 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753834 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" 
seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753847 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753862 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753881 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753895 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753909 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753922 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753937 
4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753982 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.753999 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754023 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754039 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754053 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754086 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754100 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754116 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754132 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754144 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754162 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754177 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754190 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754203 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754217 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754232 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754245 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754258 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" 
seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754274 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754289 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754850 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754880 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754895 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.754908 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 11 13:53:06 crc 
kubenswrapper[4924]: I1211 13:53:06.754923 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756745 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756763 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756776 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756792 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756804 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756817 4924 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756830 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756842 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756854 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756870 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756884 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756895 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756908 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756922 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756936 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756952 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756970 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756984 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.756998 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757012 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757025 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757070 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757086 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757100 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757114 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757140 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757153 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757169 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757183 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.757198 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.758120 4924 manager.go:324] Recovery completed Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759056 4924 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759131 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759153 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759166 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759180 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759194 4924 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759210 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759222 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759235 4924 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759247 4924 reconstruct.go:97] "Volume reconstruction finished" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.759255 4924 reconciler.go:26] "Reconciler: start to sync state" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.770131 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.771900 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.771960 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:06 crc 
kubenswrapper[4924]: I1211 13:53:06.771974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.775112 4924 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.775208 4924 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.775289 4924 state_mem.go:36] "Initialized new in-memory state store" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.777814 4924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.781477 4924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.781605 4924 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.781708 4924 kubelet.go:2335] "Starting kubelet main sync loop" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.781829 4924 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.828026 4924 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 13:53:06 crc kubenswrapper[4924]: W1211 13:53:06.877266 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.877559 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.883022 4924 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.891905 4924 policy_none.go:49] "None policy: Start" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.893254 4924 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.893305 4924 state_mem.go:35] "Initializing new in-memory state store" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.929017 4924 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.931963 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.951613 4924 manager.go:334] "Starting Device Plugin manager" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.951664 4924 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.951676 4924 server.go:79] "Starting device plugin registration server" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.952094 4924 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.952112 4924 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 
13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.952316 4924 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.952409 4924 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 13:53:06 crc kubenswrapper[4924]: I1211 13:53:06.952419 4924 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 13:53:06 crc kubenswrapper[4924]: E1211 13:53:06.961131 4924 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.052255 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.053563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.053611 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.053622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.053649 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.054260 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.084045 4924 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.084161 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.085544 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.085605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.085619 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.085829 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.086104 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.086160 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.086998 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087063 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087167 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087175 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087187 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087463 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.087538 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088214 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088500 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088646 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088826 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.088870 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092599 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092953 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.092993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.093018 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.093388 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.093524 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.094750 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.094778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.094791 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.094963 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.095001 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.095921 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.095956 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.095970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.095930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.096038 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.096054 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.163926 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164039 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164140 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164211 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164280 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164313 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164359 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164384 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164420 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164490 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164584 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164640 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164704 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164721 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.164742 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 
13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.255180 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.256552 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.256591 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.256600 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.256624 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.257086 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266404 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266547 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266734 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266781 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266608 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266608 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.266980 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267072 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267174 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267225 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267275 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267338 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267383 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267420 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267456 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267481 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267490 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267520 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267532 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 
13:53:07.267551 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267558 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267573 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267578 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267595 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267597 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267623 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267634 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267610 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.267708 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.268252 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc 
kubenswrapper[4924]: E1211 13:53:07.333716 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.428100 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.440588 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.463065 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-fd891c687b68af2bf3b141185b64d896c519479c6a20b5fc280a96d111140a2a WatchSource:0}: Error finding container fd891c687b68af2bf3b141185b64d896c519479c6a20b5fc280a96d111140a2a: Status 404 returned error can't find the container with id fd891c687b68af2bf3b141185b64d896c519479c6a20b5fc280a96d111140a2a Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.465097 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.473921 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.480985 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a65f9991128b578c676d04b395be2c7a48c0d5166a1f260346f8b5a7cdec2110 WatchSource:0}: Error finding container a65f9991128b578c676d04b395be2c7a48c0d5166a1f260346f8b5a7cdec2110: Status 404 returned error can't find the container with id a65f9991128b578c676d04b395be2c7a48c0d5166a1f260346f8b5a7cdec2110 Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.480991 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.496383 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-abf563d4dbb66280d249698753f0bba42f9c0545cd5d6270894fde1d4e0574b8 WatchSource:0}: Error finding container abf563d4dbb66280d249698753f0bba42f9c0545cd5d6270894fde1d4e0574b8: Status 404 returned error can't find the container with id abf563d4dbb66280d249698753f0bba42f9c0545cd5d6270894fde1d4e0574b8 Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.497597 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bf8ac9389328b5375ff32029c08c5d987462ce9970777b6162bbbde4b7f1cff7 WatchSource:0}: Error finding container bf8ac9389328b5375ff32029c08c5d987462ce9970777b6162bbbde4b7f1cff7: Status 404 returned error can't find the container with id bf8ac9389328b5375ff32029c08c5d987462ce9970777b6162bbbde4b7f1cff7 Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.657769 4924 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.659307 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.659361 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.659372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.659397 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.659913 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.708520 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.708630 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.724952 4924 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection 
refused Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.727923 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 04:26:54.413951079 +0000 UTC Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.749762 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.749847 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.786179 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a65f9991128b578c676d04b395be2c7a48c0d5166a1f260346f8b5a7cdec2110"} Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.787451 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fd891c687b68af2bf3b141185b64d896c519479c6a20b5fc280a96d111140a2a"} Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.788647 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"39396e5e75e630771baba87affcfb8eb3f4bb2dcf6623b903702f992fbef555b"} Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.789459 4924 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"abf563d4dbb66280d249698753f0bba42f9c0545cd5d6270894fde1d4e0574b8"} Dec 11 13:53:07 crc kubenswrapper[4924]: I1211 13:53:07.790602 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf8ac9389328b5375ff32029c08c5d987462ce9970777b6162bbbde4b7f1cff7"} Dec 11 13:53:07 crc kubenswrapper[4924]: W1211 13:53:07.845718 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:07 crc kubenswrapper[4924]: E1211 13:53:07.845843 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:08 crc kubenswrapper[4924]: W1211 13:53:08.032235 4924 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:08 crc kubenswrapper[4924]: E1211 13:53:08.032376 4924 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 
38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:08 crc kubenswrapper[4924]: E1211 13:53:08.135137 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.460420 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.464225 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.464259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.464268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.464290 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:08 crc kubenswrapper[4924]: E1211 13:53:08.464844 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.69:6443: connect: connection refused" node="crc" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.724835 4924 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.69:6443: connect: connection refused Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.728962 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-11-23 20:07:45.886718276 +0000 UTC Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.796717 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.797037 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.797142 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.797219 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.796823 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.797997 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" exitCode=0 Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.798072 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.798212 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.798535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.798586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.798600 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.799118 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.799164 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.799173 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.800237 4924 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4cba992df5b9219ec98e2e42fa684c0cf2bbf4547d386ad145431956b68aee70" exitCode=0 Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.800391 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.800428 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4cba992df5b9219ec98e2e42fa684c0cf2bbf4547d386ad145431956b68aee70"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.800556 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801209 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801247 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801299 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801616 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801643 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.801655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: E1211 13:53:08.802115 4924 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.69:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:53:08 crc 
kubenswrapper[4924]: I1211 13:53:08.803041 4924 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="39154bfcec8f9ab3b6f9300864b73e3dcb6ae3f5b455c430e3171a23f958a9e2" exitCode=0 Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.803194 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"39154bfcec8f9ab3b6f9300864b73e3dcb6ae3f5b455c430e3171a23f958a9e2"} Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.803229 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.804229 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.804251 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.804262 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.806031 4924 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a" exitCode=0 Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.806127 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.806127 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a"} Dec 11 13:53:08 crc 
kubenswrapper[4924]: I1211 13:53:08.807291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.807400 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:08 crc kubenswrapper[4924]: I1211 13:53:08.807476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:09 crc kubenswrapper[4924]: E1211 13:53:09.458082 4924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18802d98734fbf00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:53:06.722119424 +0000 UTC m=+0.231600441,LastTimestamp:2025-12-11 13:53:06.722119424 +0000 UTC m=+0.231600441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.729198 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:11:12.31214822 +0000 UTC Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.729318 4924 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 728h18m2.582833534s for next certificate rotation Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.811610 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.811677 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.811690 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.811700 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.812964 4924 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b261b45eec39778ff68af5eec9249e0e3558ac5d37233ffc1f6448ff0ad614d9" exitCode=0 Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.813022 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b261b45eec39778ff68af5eec9249e0e3558ac5d37233ffc1f6448ff0ad614d9"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.813154 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.813972 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:09 crc 
kubenswrapper[4924]: I1211 13:53:09.813994 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.814004 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.816015 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9479027dcab2a78c954be8374f27030f4e974e9a705da4d77cca1bf546929c05"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.816092 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.816836 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.816856 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.816863 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.819177 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.819563 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.819850 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f"} Dec 11 
13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.819878 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.819887 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca"} Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820143 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820226 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820650 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820665 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:09 crc kubenswrapper[4924]: I1211 13:53:09.820674 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.065823 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.067140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.067194 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.067207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.067238 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.827095 4924 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c0b680c26a54870352206907c7e627fdebe22d7e9d01406d28ae63bc7aef98be" exitCode=0 Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.827197 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c0b680c26a54870352206907c7e627fdebe22d7e9d01406d28ae63bc7aef98be"} Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.827528 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.830917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.830968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.830984 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.835992 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.836451 4924 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6"} Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.836529 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.836617 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.836661 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837085 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837124 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837634 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.837976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 
13:53:10.838004 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.838024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.838693 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.838827 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.840128 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.840172 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:10 crc kubenswrapper[4924]: I1211 13:53:10.840189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.212696 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842556 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55f3969dfa2b059bd160be81a582586cd451366935585db5d78ce3ba11fa0e78"} Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842632 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9f16cce4dab4c6291e6beabb631907b2f1157aa7c7ca8a185d9d3084a5cef254"} Dec 11 13:53:11 crc 
kubenswrapper[4924]: I1211 13:53:11.842651 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"38c1736644bacfcc11ac5b561980eb74201f152c42a28c8d9557009e5f91e847"} Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842664 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b20e783fafccdd7012230bee9cd575f303a9c5488c3253f8e470876ebf90e7ca"} Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842676 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2ec27fdd325767098bbaa94839e2c2fe617b53a9961ae2063af4c71f0b9e78b"} Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842711 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842752 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842709 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842904 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.842991 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844205 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844228 4924 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844237 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844368 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844380 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.844999 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.845024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.845032 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.845495 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.845523 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.845536 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.929476 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:11 crc 
kubenswrapper[4924]: I1211 13:53:11.933462 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:11 crc kubenswrapper[4924]: I1211 13:53:11.940406 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.357268 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.851001 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.851105 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.852219 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.853534 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.853781 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.853968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.854024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.854109 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.853657 4924 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.854611 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.854799 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:12 crc kubenswrapper[4924]: I1211 13:53:12.854990 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.192293 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.386665 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.838904 4924 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.839275 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.853772 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:13 crc kubenswrapper[4924]: 
I1211 13:53:13.853865 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.853772 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855469 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855699 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855722 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855846 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855900 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.855916 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:13 crc kubenswrapper[4924]: I1211 13:53:13.883748 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.856737 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.856737 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858489 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858489 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858531 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:14 crc kubenswrapper[4924]: I1211 13:53:14.858548 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:16 crc kubenswrapper[4924]: E1211 13:53:16.961286 4924 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 13:53:19 crc kubenswrapper[4924]: I1211 13:53:19.725371 4924 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 13:53:19 crc kubenswrapper[4924]: E1211 13:53:19.735669 4924 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 11 13:53:20 crc kubenswrapper[4924]: E1211 13:53:20.068898 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.325457 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.325543 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.432668 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.432721 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 403" Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.444203 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 13:53:20 crc kubenswrapper[4924]: I1211 13:53:20.444323 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.218348 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.218536 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.220658 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.220692 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.220705 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.828100 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.828398 4924 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.829560 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.829583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.829595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.850314 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.874834 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.875695 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.875742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.875755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.887415 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.935864 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.936037 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.937064 4924 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.937097 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.937107 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:21 crc kubenswrapper[4924]: I1211 13:53:21.941355 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.877212 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.877229 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878173 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878188 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878194 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878214 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:22 crc kubenswrapper[4924]: I1211 13:53:22.878311 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.269895 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.271187 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.271219 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.271228 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.271251 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 11 13:53:23 crc kubenswrapper[4924]: E1211 13:53:23.274686 4924 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.839500 4924 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 11 13:53:23 crc kubenswrapper[4924]: I1211 13:53:23.839579 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.430307 4924 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.432509 4924 trace.go:236] Trace[1687177786]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:53:10.826) (total time: 14605ms):
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1687177786]: ---"Objects listed" error: 14605ms (13:53:25.432)
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1687177786]: [14.605996498s] [14.605996498s] END
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.432534 4924 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.432611 4924 trace.go:236] Trace[1682929265]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:53:10.867) (total time: 14564ms):
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1682929265]: ---"Objects listed" error: 14564ms (13:53:25.432)
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1682929265]: [14.56488333s] [14.56488333s] END
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.432627 4924 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.434045 4924 trace.go:236] Trace[1907332251]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:53:10.474) (total time: 14959ms):
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1907332251]: ---"Objects listed" error: 14959ms (13:53:25.433)
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1907332251]: [14.959210287s] [14.959210287s] END
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.434063 4924 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.435381 4924 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.435778 4924 trace.go:236] Trace[1077101423]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:53:10.554) (total time: 14880ms):
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1077101423]: ---"Objects listed" error: 14880ms (13:53:25.435)
Dec 11 13:53:25 crc kubenswrapper[4924]: Trace[1077101423]: [14.880803754s] [14.880803754s] END
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.435801 4924 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.454634 4924 csr.go:261] certificate signing request csr-sjv4l is approved, waiting to be issued
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.462506 4924 csr.go:257] certificate signing request csr-sjv4l is issued
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.720484 4924 apiserver.go:52] "Watching apiserver"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.725304 4924 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.725504 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-x9vcv","openshift-machine-config-operator/machine-config-daemon-rfwqf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.725810 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.725923 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.725999 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.726021 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.726050 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.726076 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.726396 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.726446 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.726053 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.726511 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.726514 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x9vcv"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.728990 4924 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729530 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729541 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729598 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729612 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729535 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.729630 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.731113 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732289 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732334 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732355 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732373 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732390 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732408 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732424 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732444 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732471 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732489 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732506 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732522 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732539 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732555 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732572 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732589 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732649 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732678 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732702 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732725 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732746 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732755 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732767 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732834 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732867 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732869 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732872 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732939 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732960 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.732984 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733003 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733012 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733020 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733053 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733069 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733087 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733096 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733104 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733122 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733130 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733125 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733173 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733137 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733240 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733251 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733270 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733289 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733294 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733336 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733342 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733360 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733385 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733401 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733419 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733435 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733452 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733471 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733487 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733504 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733522 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733538 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733558 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733578 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733597 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733614 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733632 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733649 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733665 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733681 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733697 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733713 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733732 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733748 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733771 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733788 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733806 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733822 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733837 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733853 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733871 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733888 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733903 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733921 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733937 4924
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733954 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733971 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733988 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734013 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734029 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734047 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734063 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734081 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734096 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734112 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 
13:53:25.734132 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734149 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734226 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734243 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734258 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734274 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734290 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734306 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734336 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734359 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734377 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734393 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734409 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734427 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734445 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734463 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734478 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734497 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734514 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734530 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734546 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734561 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734578 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734599 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734620 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734640 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734661 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734682 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734703 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734722 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734739 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734754 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734772 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734790 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734804 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734820 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734837 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734854 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734872 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734888 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734903 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734924 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734939 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734956 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 
13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734972 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734987 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735003 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735019 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735037 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735055 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735072 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735091 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735108 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735123 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735140 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 
13:53:25.735157 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735173 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735190 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735205 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735221 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735236 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735252 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735268 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735285 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735300 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735339 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735361 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735377 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735396 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735412 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735429 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735446 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735467 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735486 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735503 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735519 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735539 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735556 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735573 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735592 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735610 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735629 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735646 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735667 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735684 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735702 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735718 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735736 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735753 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735772 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735789 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735804 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735823 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735840 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735858 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735876 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735893 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735915 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735933 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") 
" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735952 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735970 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735987 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736004 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736041 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736059 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736077 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736095 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736130 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736187 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736205 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 
13:53:25.736225 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736241 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736258 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736298 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5m66\" (UniqueName: \"kubernetes.io/projected/b5cac4fc-9d62-4680-9f70-650c4c118a9e-kube-api-access-w5m66\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736335 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736355 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736375 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736394 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736415 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736436 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 
13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736456 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736472 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736489 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5cac4fc-9d62-4680-9f70-650c4c118a9e-hosts-file\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736507 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafc4b5e-18de-4683-b008-775c510f12bf-proxy-tls\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736527 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafc4b5e-18de-4683-b008-775c510f12bf-rootfs\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736549 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736568 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736585 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736610 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736634 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fafc4b5e-18de-4683-b008-775c510f12bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736652 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736674 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736690 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n98m\" (UniqueName: \"kubernetes.io/projected/fafc4b5e-18de-4683-b008-775c510f12bf-kube-api-access-8n98m\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736737 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736749 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736759 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736769 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736778 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736789 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736798 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736829 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736840 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" 
DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736851 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736860 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733345 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733443 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733498 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733507 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733533 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733619 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733670 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733698 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733739 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733833 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733846 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733936 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.733973 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734128 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734276 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734313 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734376 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734398 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734413 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734446 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734500 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734574 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737175 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734713 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734823 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734841 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734912 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734920 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734932 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.734948 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735347 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.735607 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736088 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736403 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736417 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736636 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736693 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.736765 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737017 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737084 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737259 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737469 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.737501 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738167 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738241 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738295 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738466 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738726 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738860 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.738949 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739003 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739309 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739463 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739556 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739569 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739653 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739731 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739785 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739749 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.740485 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.740637 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.740721 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.740797 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:53:26.240775387 +0000 UTC m=+19.750256364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.740798 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.740914 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.741401 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.741598 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.752544 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.752635 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.752787 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.752809 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753054 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753175 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753444 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753533 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753576 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753597 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753643 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.753739 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754182 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754260 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754472 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754519 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754514 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754710 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.739742 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754762 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.754816 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755074 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755122 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755381 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755424 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755487 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755718 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755793 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.755826 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756085 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756240 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756337 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756501 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756583 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756773 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756847 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.756968 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.757259 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.757313 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.757782 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.757795 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.758027 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.758060 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.757807 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.758229 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:26.258211655 +0000 UTC m=+19.767692632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.758459 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.759423 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.759752 4924 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.760389 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.760705 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.760767 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.760849 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.760907 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.764776 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.764878 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:26.264850396 +0000 UTC m=+19.774331373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.765433 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.757450 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.773606 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.773660 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.774366 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.774623 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.774824 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.774863 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.777970 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.778093 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.778109 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.778127 4924 projected.go:194] Error 
preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.778184 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:26.278167579 +0000 UTC m=+19.787648556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.778310 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.778905 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod 
"8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.780395 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.780837 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.780852 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.781126 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.781253 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.781275 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.781294 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:25 crc kubenswrapper[4924]: E1211 13:53:25.781364 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:26.281348215 +0000 UTC m=+19.790829192 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.781461 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.781751 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.781955 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.782239 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.782314 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.782627 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783044 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783153 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783187 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783074 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783350 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783382 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783502 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783597 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783795 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.783854 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784041 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784127 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784256 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784318 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784356 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784450 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784523 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.784629 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.785273 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.785931 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.786295 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.786515 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.786679 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.786904 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.787072 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.787197 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.787360 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.787547 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.787609 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.792000 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.792923 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.795763 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.795960 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.797309 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.797657 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.798139 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.803994 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.806006 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.811013 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815068 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815189 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815317 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815494 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815578 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.815901 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.819354 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.822372 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.825689 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.826112 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.827014 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.834434 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.834821 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.835260 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.835742 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.835913 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.836446 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.836696 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.836819 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837135 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837473 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837582 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837708 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837750 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837794 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837949 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5cac4fc-9d62-4680-9f70-650c4c118a9e-hosts-file\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.837981 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafc4b5e-18de-4683-b008-775c510f12bf-proxy-tls\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838024 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafc4b5e-18de-4683-b008-775c510f12bf-rootfs\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838056 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fafc4b5e-18de-4683-b008-775c510f12bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838103 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838123 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n98m\" (UniqueName: \"kubernetes.io/projected/fafc4b5e-18de-4683-b008-775c510f12bf-kube-api-access-8n98m\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838147 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5m66\" (UniqueName: \"kubernetes.io/projected/b5cac4fc-9d62-4680-9f70-650c4c118a9e-kube-api-access-w5m66\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: 
I1211 13:53:25.838187 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838219 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838263 4924 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838279 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838291 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838302 4924 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838313 4924 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838342 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838352 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: W1211 13:53:25.838354 4924 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838372 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838360 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838403 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838444 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838458 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838472 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838486 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838523 4924 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838537 4924 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838542 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b5cac4fc-9d62-4680-9f70-650c4c118a9e-hosts-file\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838551 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838563 4924 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838599 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838613 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838624 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838636 4924 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838648 4924 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838684 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838698 4924 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838709 4924 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838721 4924 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838733 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838770 4924 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838784 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838796 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838807 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.838993 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839014 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839024 4924 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839036 4924 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node 
\"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839047 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839084 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839096 4924 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839106 4924 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839118 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839129 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839164 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839175 4924 reconciler_common.go:293] 
"Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839186 4924 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839197 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839208 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839244 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839257 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839268 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839278 4924 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839289 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839342 4924 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839358 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839368 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839406 4924 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839421 4924 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839431 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath 
\"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839443 4924 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839456 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839495 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839508 4924 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839519 4924 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839530 4924 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839563 4924 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839579 4924 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839590 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839601 4924 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839612 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839650 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839666 4924 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839679 4924 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839690 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: 
\"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839735 4924 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839751 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839761 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839773 4924 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839785 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839852 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839865 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839903 4924 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839919 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839930 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839942 4924 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839954 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839992 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840008 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840020 4924 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840034 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840045 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840057 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840067 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840077 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840090 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc 
kubenswrapper[4924]: I1211 13:53:25.840101 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840113 4924 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840125 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840135 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840146 4924 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840157 4924 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840168 4924 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840179 4924 
reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840190 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840204 4924 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840246 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840258 4924 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840269 4924 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840282 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.839467 4924 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: W1211 13:53:25.840365 4924 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~projected/kube-api-access-zgdk5 Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840378 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840365 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840409 4924 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840421 4924 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840432 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840442 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840455 4924 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840468 4924 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840479 4924 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840492 4924 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840504 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840517 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 
11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840527 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840537 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840548 4924 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840559 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840570 4924 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840581 4924 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840592 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840602 
4924 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840612 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840623 4924 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840634 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840645 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840655 4924 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840690 4924 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840703 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840704 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafc4b5e-18de-4683-b008-775c510f12bf-rootfs\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840715 4924 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840758 4924 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840771 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: W1211 13:53:25.840774 4924 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840782 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840785 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840796 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840809 4924 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840820 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840832 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840842 4924 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840854 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc 
kubenswrapper[4924]: I1211 13:53:25.840865 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840877 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840887 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840897 4924 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840908 4924 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840918 4924 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840929 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840940 4924 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840951 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840962 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840971 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840984 4924 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.840995 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841004 4924 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841015 4924 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841025 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841034 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841046 4924 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841059 4924 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841068 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841078 4924 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841087 4924 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841100 4924 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841109 4924 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841119 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841131 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841141 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841149 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841160 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841169 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841494 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafc4b5e-18de-4683-b008-775c510f12bf-proxy-tls\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841740 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:25 crc kubenswrapper[4924]: W1211 13:53:25.841806 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-57a43c614f92f03447f979ba47c4cef84f71868b0615c88b407fcc026f6cca40 WatchSource:0}: Error finding container 57a43c614f92f03447f979ba47c4cef84f71868b0615c88b407fcc026f6cca40: Status 404 returned error can't find the container with id 57a43c614f92f03447f979ba47c4cef84f71868b0615c88b407fcc026f6cca40 Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.841922 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.842201 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fafc4b5e-18de-4683-b008-775c510f12bf-mcd-auth-proxy-config\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.842371 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.847073 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.858792 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.859091 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n98m\" (UniqueName: \"kubernetes.io/projected/fafc4b5e-18de-4683-b008-775c510f12bf-kube-api-access-8n98m\") pod \"machine-config-daemon-rfwqf\" (UID: \"fafc4b5e-18de-4683-b008-775c510f12bf\") " pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.860473 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.860873 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.866882 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.869289 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5m66\" (UniqueName: \"kubernetes.io/projected/b5cac4fc-9d62-4680-9f70-650c4c118a9e-kube-api-access-w5m66\") pod \"node-resolver-x9vcv\" (UID: \"b5cac4fc-9d62-4680-9f70-650c4c118a9e\") " pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.870240 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.877083 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883103 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54114->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883154 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54114->192.168.126.11:17697: read: connection reset by peer" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883520 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" 
start-of-body= Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883544 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883553 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54120->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.883584 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:54120->192.168.126.11:17697: read: connection reset by peer" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.886666 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.888407 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.888718 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"57a43c614f92f03447f979ba47c4cef84f71868b0615c88b407fcc026f6cca40"} Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.898908 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.912684 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942695 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942815 4924 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942830 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942843 4924 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942856 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942867 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942899 4924 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942911 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942924 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.942934 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 
11 13:53:25 crc kubenswrapper[4924]: I1211 13:53:25.998571 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-j8qls"] Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:25.999942 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wjmj7"] Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.000045 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.000112 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jnlw"] Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.000584 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.001681 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5vrtp"] Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.001784 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.002392 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.003459 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.003777 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.003957 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007126 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007255 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007280 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007307 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007600 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.007813 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008188 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008257 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008349 4924 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008460 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008509 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008539 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008584 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008615 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.008728 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.017538 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.028500 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043786 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-cnibin\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043829 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043856 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7jw\" (UniqueName: \"kubernetes.io/projected/3829d010-f239-43e9-9775-6dc41c5e83c6-kube-api-access-5j7jw\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043888 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043905 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/544b1b24-246d-42dc-83f2-b5cbd3b2e927-host\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043922 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.043936 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-bin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044047 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044096 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/544b1b24-246d-42dc-83f2-b5cbd3b2e927-serviceca\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044118 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-kubelet\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044136 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-conf-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044185 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-multus-certs\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044206 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-system-cni-dir\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044225 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044246 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044270 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8d9c\" (UniqueName: \"kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044288 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-daemon-config\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044309 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044391 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhqj\" (UniqueName: \"kubernetes.io/projected/544b1b24-246d-42dc-83f2-b5cbd3b2e927-kube-api-access-qrhqj\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044415 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cnibin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044433 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-etc-kubernetes\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044473 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044491 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-os-release\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044509 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4jt\" (UniqueName: \"kubernetes.io/projected/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-kube-api-access-wr4jt\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044528 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044545 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044572 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044592 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044610 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044628 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-multus\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044646 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-hostroot\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044686 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044719 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044757 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044820 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044838 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-netns\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044917 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044950 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.044997 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cni-binary-copy\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045017 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-socket-dir-parent\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045036 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-os-release\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045059 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045079 4924 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045101 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-system-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045317 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045384 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.045402 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc 
kubenswrapper[4924]: I1211 13:53:26.045420 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-k8s-cni-cncf-io\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.057421 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.060570 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.074319 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.090597 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.101089 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.102473 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.112439 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.117511 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-993110175b0a72e9240d4171b6693065f3d8ec753ababac74037af9806fd620c WatchSource:0}: Error finding container 993110175b0a72e9240d4171b6693065f3d8ec753ababac74037af9806fd620c: Status 404 
returned error can't find the container with id 993110175b0a72e9240d4171b6693065f3d8ec753ababac74037af9806fd620c Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.126840 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148656 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148739 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148765 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148787 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-multus\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148805 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-hostroot\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148824 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148842 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148859 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148878 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-netns\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148873 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148904 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148918 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-multus\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148957 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-hostroot\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148959 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-netns\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148873 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash\") pod \"ovnkube-node-8jnlw\" (UID: 
\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.148928 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149029 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149092 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149161 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cni-binary-copy\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149232 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-socket-dir-parent\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 
13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149263 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-os-release\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149284 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149306 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149343 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-system-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149371 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149398 
4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149422 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149442 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-k8s-cni-cncf-io\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149466 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-cnibin\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149486 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 
13:53:26.149505 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7jw\" (UniqueName: \"kubernetes.io/projected/3829d010-f239-43e9-9775-6dc41c5e83c6-kube-api-access-5j7jw\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149526 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149561 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/544b1b24-246d-42dc-83f2-b5cbd3b2e927-host\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149580 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149601 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-bin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149621 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149641 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/544b1b24-246d-42dc-83f2-b5cbd3b2e927-serviceca\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149661 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-kubelet\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149686 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-conf-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149716 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-multus-certs\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149736 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-system-cni-dir\") pod 
\"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149750 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149796 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149766 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149798 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-binary-copy\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149838 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib\") pod \"ovnkube-node-8jnlw\" (UID: 
\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149846 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149879 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149881 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8d9c\" (UniqueName: \"kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149921 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-daemon-config\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149947 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 
11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149969 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhqj\" (UniqueName: \"kubernetes.io/projected/544b1b24-246d-42dc-83f2-b5cbd3b2e927-kube-api-access-qrhqj\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149989 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cnibin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150009 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-etc-kubernetes\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150048 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150082 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150070 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-os-release\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150152 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-system-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150156 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cni-binary-copy\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150154 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4jt\" (UniqueName: \"kubernetes.io/projected/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-kube-api-access-wr4jt\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150195 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150219 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd\") pod 
\"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150223 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150241 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150166 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-multus-certs\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150197 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-system-cni-dir\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.149919 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-conf-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 
11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150353 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x9vcv" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150495 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/544b1b24-246d-42dc-83f2-b5cbd3b2e927-host\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150719 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/544b1b24-246d-42dc-83f2-b5cbd3b2e927-serviceca\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150777 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-os-release\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150795 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150812 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-cni-bin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " 
pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150887 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-cni-dir\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150918 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150943 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-socket-dir-parent\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150945 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-run-k8s-cni-cncf-io\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.150964 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-cnibin\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: 
I1211 13:53:26.150992 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151171 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-host-var-lib-kubelet\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151295 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151309 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-etc-kubernetes\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151346 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151358 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151391 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-cnibin\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151396 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-os-release\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151409 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-multus-daemon-config\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151411 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151429 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151718 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3829d010-f239-43e9-9775-6dc41c5e83c6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.151743 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3829d010-f239-43e9-9775-6dc41c5e83c6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.155947 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.156840 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.175008 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8d9c\" (UniqueName: \"kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c\") pod \"ovnkube-node-8jnlw\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.175967 4924 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5j7jw\" (UniqueName: \"kubernetes.io/projected/3829d010-f239-43e9-9775-6dc41c5e83c6-kube-api-access-5j7jw\") pod \"multus-additional-cni-plugins-j8qls\" (UID: \"3829d010-f239-43e9-9775-6dc41c5e83c6\") " pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.181858 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.188880 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhqj\" (UniqueName: \"kubernetes.io/projected/544b1b24-246d-42dc-83f2-b5cbd3b2e927-kube-api-access-qrhqj\") pod \"node-ca-wjmj7\" (UID: \"544b1b24-246d-42dc-83f2-b5cbd3b2e927\") " pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.193767 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4jt\" (UniqueName: 
\"kubernetes.io/projected/5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c-kube-api-access-wr4jt\") pod \"multus-5vrtp\" (UID: \"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\") " pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.196925 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.208023 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.225378 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.244626 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.252364 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.259740 4924 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:53:27.259708298 +0000 UTC m=+20.769189275 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.266632 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.290504 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.303181 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.317188 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.332037 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.333031 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-j8qls" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.345058 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wjmj7" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.353776 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.354147 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.360654 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5vrtp" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.360982 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.361244 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.361270 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361296 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361438 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361465 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361479 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361549 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361438 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:27.361421448 +0000 UTC m=+20.870902425 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.361299 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361593 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:27.361577673 +0000 UTC m=+20.871058660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361622 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361719 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:27.361681736 +0000 UTC m=+20.871162773 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361750 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361778 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.361866 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:27.361844521 +0000 UTC m=+20.871325508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.366893 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3829d010_f239_43e9_9775_6dc41c5e83c6.slice/crio-4454b347452d507502f2db5dbab2cb66be8f40d7465ef18315c2d7c2e7570bce WatchSource:0}: Error finding container 4454b347452d507502f2db5dbab2cb66be8f40d7465ef18315c2d7c2e7570bce: Status 404 returned error can't find the container with id 4454b347452d507502f2db5dbab2cb66be8f40d7465ef18315c2d7c2e7570bce Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.390214 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cbdb7db_1aa3_4cc6_8a0d_9461af8e1b8c.slice/crio-077c7ffe76a7c9c3367d1c167115e8e804d839fb810d7e3e054c74198e73866e WatchSource:0}: Error finding container 077c7ffe76a7c9c3367d1c167115e8e804d839fb810d7e3e054c74198e73866e: Status 404 returned error can't find the container with id 077c7ffe76a7c9c3367d1c167115e8e804d839fb810d7e3e054c74198e73866e Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.393517 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47432eab_9072_43ce_9bf7_0dbd6fa271e7.slice/crio-e106210478db1204bd21cdc89005806723d773d95fbb0d3a2ee25194714f7df5 WatchSource:0}: Error finding container e106210478db1204bd21cdc89005806723d773d95fbb0d3a2ee25194714f7df5: Status 404 returned error can't find the container with id 
e106210478db1204bd21cdc89005806723d773d95fbb0d3a2ee25194714f7df5 Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.463657 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-11 13:48:25 +0000 UTC, rotation deadline is 2026-09-26 02:22:59.415739128 +0000 UTC Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.463718 4924 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6924h29m32.952023593s for next certificate rotation Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.661818 4924 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662403 4924 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662463 4924 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662491 4924 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662505 4924 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close 
- watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662514 4924 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662533 4924 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662534 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662542 4924 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662473 4924 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662553 4924 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted 
less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662566 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662579 4924 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662587 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662607 4924 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662614 4924 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662627 4924 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch 
lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662637 4924 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662646 4924 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662665 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662685 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662699 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662523 4924 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 
13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662703 4924 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662554 4924 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662483 4924 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662531 4924 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662763 4924 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662784 4924 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - 
watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662803 4924 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662821 4924 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662491 4924 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662464 4924 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662667 4924 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662847 4924 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: W1211 13:53:26.662852 4924 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.782552 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:26 crc kubenswrapper[4924]: E1211 13:53:26.782675 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.786074 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.786766 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.787977 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.789490 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.792664 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.793751 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.794765 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.796492 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.797673 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.798211 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.799461 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.800413 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.802438 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.803304 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.804615 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.805370 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.806061 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.810394 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.810810 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.811552 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.812286 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.813116 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.813858 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.814896 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.815428 
4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.816695 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.818353 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.819716 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.821363 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.827542 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.828677 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.829780 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.829978 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.830635 4924 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.830765 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.832988 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 
13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.834247 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.834869 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.837512 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.838767 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.839433 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.840718 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.841607 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.842759 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 
13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.843529 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.844818 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.845134 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.845739 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.847021 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.848609 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.849946 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.851059 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.852228 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.852887 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.853902 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.854699 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.855453 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.856570 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.864704 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.877702 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.892344 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x9vcv" event={"ID":"b5cac4fc-9d62-4680-9f70-650c4c118a9e","Type":"ContainerStarted","Data":"ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.892390 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x9vcv" 
event={"ID":"b5cac4fc-9d62-4680-9f70-650c4c118a9e","Type":"ContainerStarted","Data":"1b08e26120517a2d713307dc73af6726ce620de8cbac9f02708a407129d4cd6d"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.893816 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" exitCode=0 Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.893878 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.893910 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"e106210478db1204bd21cdc89005806723d773d95fbb0d3a2ee25194714f7df5"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.895288 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8" exitCode=0 Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.895336 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.895661 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerStarted","Data":"4454b347452d507502f2db5dbab2cb66be8f40d7465ef18315c2d7c2e7570bce"} Dec 11 13:53:26 crc kubenswrapper[4924]: 
I1211 13:53:26.897288 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.897344 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.897360 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"f8108eacf6b1d87c8af5ae4db67c366e13bb0922162d563cd21880670c7624ad"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.898990 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.899021 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7a9de5e0e36ecd59a267e4c3b5d6ea50e9614d22f90572d8a5be3614d62259cd"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.900218 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerStarted","Data":"ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 
13:53:26.900257 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerStarted","Data":"077c7ffe76a7c9c3367d1c167115e8e804d839fb810d7e3e054c74198e73866e"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.901290 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wjmj7" event={"ID":"544b1b24-246d-42dc-83f2-b5cbd3b2e927","Type":"ContainerStarted","Data":"f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.901342 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wjmj7" event={"ID":"544b1b24-246d-42dc-83f2-b5cbd3b2e927","Type":"ContainerStarted","Data":"9bb8bb45e05f734fa1907860dcd6d74302d67401b7478897aef0afb34b00a6e6"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.902864 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.904222 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\
"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/
etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.905023 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" exitCode=255 Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.905089 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.906225 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"993110175b0a72e9240d4171b6693065f3d8ec753ababac74037af9806fd620c"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.908633 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.908678 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41"} Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.930904 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.945458 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.954311 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.956025 4924 scope.go:117] "RemoveContainer" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:53:26 crc kubenswrapper[4924]: I1211 13:53:26.975652 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.013662 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.028966 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.042808 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.057310 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.068695 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.086135 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.098280 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.113896 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.125378 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.147015 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.163144 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.179139 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.194982 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.212437 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.244832 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.275386 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.275671 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:53:29.275594466 +0000 UTC m=+22.785075443 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.395257 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.395312 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.395367 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.395404 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395520 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395578 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:29.395561488 +0000 UTC m=+22.905042465 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395927 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395943 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395955 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.395981 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:29.39597298 +0000 UTC m=+22.905453957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396009 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396033 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:29.396025672 +0000 UTC m=+22.905506649 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396083 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396093 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396101 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.396119 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:29.396113765 +0000 UTC m=+22.905594742 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.475408 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.502125 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.505367 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.518070 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.564795 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.579096 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.580006 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.597976 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.618929 4924 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.648454 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.704093 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.758556 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.769608 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.783397 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.783510 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.783807 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:27 crc kubenswrapper[4924]: E1211 13:53:27.783861 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.808744 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.841534 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.843557 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.879538 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.905604 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914189 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914230 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" 
event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914239 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914248 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914256 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.914976 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.916091 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef" exitCode=0 Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.916144 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.918690 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.921397 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4"} Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.924672 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.930976 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.945148 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.961022 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.970678 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.979615 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 13:53:27 crc kubenswrapper[4924]: I1211 13:53:27.980914 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.000903 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.003107 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.015076 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.033411 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.047885 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.059533 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.062305 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.063575 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.063814 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.069515 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.076035 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.093365 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.112534 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.120011 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.125286 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.127505 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.142687 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.170981 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.183024 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.222516 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.253035 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.262111 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.282436 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.303306 4924 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.351747 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.389774 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.429286 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.474766 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.514472 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.552627 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.600270 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.632954 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.672798 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.714020 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.752513 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.783094 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:28 crc kubenswrapper[4924]: E1211 13:53:28.783320 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.930139 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845"} Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.942015 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.944041 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1" exitCode=0 Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.944237 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1"} Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.945039 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.951831 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.972216 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.983950 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:28 crc kubenswrapper[4924]: I1211 13:53:28.998826 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:28Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.015158 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.034505 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.045074 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.074782 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.110348 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.150030 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.192268 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.236944 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.271631 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.313711 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.313920 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:53:33.313903379 +0000 UTC m=+26.823384356 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.314211 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.351946 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.390867 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.415364 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.415428 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.415455 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.415487 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415559 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415621 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415633 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415680 4924 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415696 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415721 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415641 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:33.415619478 +0000 UTC m=+26.925100455 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415643 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.415990 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.416062 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:33.415959498 +0000 UTC m=+26.925440475 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.416098 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:33.416082472 +0000 UTC m=+26.925563539 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.416119 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:33.416110353 +0000 UTC m=+26.925591430 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.435733 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.471638 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.517555 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.549332 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.600833 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2e
ed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.630388 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.675241 4924 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.675543 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.677435 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 
13:53:29.677483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.677498 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.677626 4924 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.730285 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.744069 4924 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.744423 4924 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.745555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.745606 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.745617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.745633 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.745644 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.759750 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.763555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.763586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.763596 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.763611 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.763624 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.776823 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.780608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.780664 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.780673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.780696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.780709 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.782467 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.782583 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.782488 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.782709 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.794898 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.799344 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.799381 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.799417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.799439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.799452 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.803157 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.815255 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.819276 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.819334 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.819342 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.819357 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.819367 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.832131 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:
53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.833545 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: E1211 13:53:29.833650 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.835186 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.835209 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.835217 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.835249 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.835259 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.938367 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.938427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.938463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.938500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.938526 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:29Z","lastTransitionTime":"2025-12-11T13:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.951202 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310" exitCode=0 Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.951294 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310"} Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.972790 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:29 crc kubenswrapper[4924]: I1211 13:53:29.992408 4924 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:29Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.010566 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.028616 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.041577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.041647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.041667 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.041690 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.041708 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.050959 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.104819 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.140158 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.154677 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.154729 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.154742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc 
kubenswrapper[4924]: I1211 13:53:30.154762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.154778 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.183928 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.196655 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e
7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.234133 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.257594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.257633 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.257642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.257659 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.257669 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.271775 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.311164 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.352756 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.360452 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.360482 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.360490 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc 
kubenswrapper[4924]: I1211 13:53:30.360502 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.360511 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.463523 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.463581 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.463595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.463615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.463626 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.565420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.565450 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.565458 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.565471 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.565480 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.668636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.668695 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.668709 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.668728 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.668744 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.771431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.771488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.771500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.771516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.771530 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.782453 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:30 crc kubenswrapper[4924]: E1211 13:53:30.782654 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.846474 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.854525 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.862176 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.874314 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.874374 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.874390 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.874405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.874420 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.953268 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.967698 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.972634 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.977684 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.977722 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.977736 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.977755 4924 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.977769 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:30Z","lastTransitionTime":"2025-12-11T13:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.978547 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa" exitCode=0 Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.978947 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa"} Dec 11 13:53:30 crc kubenswrapper[4924]: I1211 13:53:30.987169 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:30Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.003360 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.017573 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.030954 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.050980 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.065104 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.081696 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.082411 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.082435 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.082445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.082460 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.082470 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.103509 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.117956 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.129573 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.143080 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.166487 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.181720 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.185376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.185407 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.185417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.185433 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.185444 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.192873 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.202034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.213162 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.225986 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.239355 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.261341 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.273194 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.288738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.288780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.288789 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.288809 4924 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.288820 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.295224 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.332018 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.376627 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.390804 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.390849 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.390861 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.390879 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.390890 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.409676 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.451408 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.493541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.493593 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.493604 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.493622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.493635 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.595706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.595739 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.595747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.595761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.595769 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.698072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.698106 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.698116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.698132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.698142 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.782400 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.782430 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:31 crc kubenswrapper[4924]: E1211 13:53:31.782594 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:31 crc kubenswrapper[4924]: E1211 13:53:31.782744 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.800546 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.800593 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.800604 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.800622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.800633 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.904564 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.904639 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.904660 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.904690 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.904711 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:31Z","lastTransitionTime":"2025-12-11T13:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.987628 4924 generic.go:334] "Generic (PLEG): container finished" podID="3829d010-f239-43e9-9775-6dc41c5e83c6" containerID="0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e" exitCode=0 Dec 11 13:53:31 crc kubenswrapper[4924]: I1211 13:53:31.987701 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerDied","Data":"0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.008417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.008473 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.008489 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.008509 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.008522 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.014966 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.035043 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.058361 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.070472 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.089576 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.105451 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.111944 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.111982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.111991 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc 
kubenswrapper[4924]: I1211 13:53:32.112008 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.112017 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.126501 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.143712 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e
7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.157732 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.170123 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.181934 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.201806 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.214657 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.214724 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.214734 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.214754 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.214766 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.220151 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.241254 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:32Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.317684 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.318129 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.318140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.318155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.318167 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.421094 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.421127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.421138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.421156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.421168 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.525783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.525920 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.526002 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.526648 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.526718 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.629533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.629600 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.629614 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.629647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.629663 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.732286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.732345 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.732355 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.732372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.732384 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.782418 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:32 crc kubenswrapper[4924]: E1211 13:53:32.782767 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.835320 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.835390 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.835403 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.835426 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.835437 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.938649 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.938712 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.938732 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.938753 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.938769 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:32Z","lastTransitionTime":"2025-12-11T13:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:32 crc kubenswrapper[4924]: I1211 13:53:32.994963 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" event={"ID":"3829d010-f239-43e9-9775-6dc41c5e83c6","Type":"ContainerStarted","Data":"ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.000003 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.000307 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.000341 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.011787 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.023966 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.025416 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.026538 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.041008 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.041036 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 
crc kubenswrapper[4924]: I1211 13:53:33.041061 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.041077 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.041090 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.046452 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.059793 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.075882 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.090894 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.104414 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.115562 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.129537 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144014 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144091 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.144896 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394
d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.157692 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.168517 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.182684 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c9
4024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.193654 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.207095 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.219514 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.230867 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.240352 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.247551 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.247601 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.247615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.247633 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.247652 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.256349 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.265563 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.275842 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.285309 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347
993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.304789 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.317374 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.330141 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.343923 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.349527 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.349570 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.349582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.349598 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.349620 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.353901 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.354062 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.35404355 +0000 UTC m=+34.863524527 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.355782 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.367163 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:33Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.451906 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.451975 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.451998 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc 
kubenswrapper[4924]: I1211 13:53:33.452026 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.452046 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.454478 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.454537 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.454562 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:33 crc 
kubenswrapper[4924]: I1211 13:53:33.454597 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454708 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454732 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454744 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454778 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454845 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454872 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454794 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.45478119 +0000 UTC m=+34.964262167 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.454972 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.455000 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.454948995 +0000 UTC m=+34.964429972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.455028 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.455018037 +0000 UTC m=+34.964499014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.455130 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.455230 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.455199493 +0000 UTC m=+34.964680510 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.554780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.554825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.554835 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.554851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.554862 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.657542 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.657594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.657607 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.657629 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.657644 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.760450 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.760516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.760549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.760595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.760619 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.783142 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.783155 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.783417 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:33 crc kubenswrapper[4924]: E1211 13:53:33.783605 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.862838 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.862896 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.862915 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.862933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.862944 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.966220 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.966263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.966274 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.966293 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:33 crc kubenswrapper[4924]: I1211 13:53:33.966303 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:33Z","lastTransitionTime":"2025-12-11T13:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.003807 4924 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.068277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.068357 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.068374 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.068400 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.068422 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.171216 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.171267 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.171278 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.171293 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.171303 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.273743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.273783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.273794 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.273810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.273823 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.375714 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.375770 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.375782 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.375806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.375823 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.478199 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.478242 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.478252 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.478268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.478278 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.580292 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.580370 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.580385 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.580404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.580415 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.683053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.683150 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.683172 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.683192 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.683206 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.782615 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:34 crc kubenswrapper[4924]: E1211 13:53:34.782879 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.785123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.785167 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.785191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.785206 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.785216 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.887532 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.887615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.887635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.887662 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.887681 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.989694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.989749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.989758 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.989772 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:34 crc kubenswrapper[4924]: I1211 13:53:34.989781 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:34Z","lastTransitionTime":"2025-12-11T13:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.009510 4924 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.093023 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.093075 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.093084 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.093128 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.093140 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.195321 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.195468 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.195488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.195510 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.195558 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.298466 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.298512 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.298521 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.298535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.298545 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.401185 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.402000 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.402013 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.402031 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.402042 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.504839 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.504881 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.504892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.504909 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.504928 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.607505 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.607545 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.607557 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.607573 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.607582 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.709941 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.710003 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.710020 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.710043 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.710060 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.782172 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.782201 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:35 crc kubenswrapper[4924]: E1211 13:53:35.782393 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:35 crc kubenswrapper[4924]: E1211 13:53:35.782405 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.812066 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.812099 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.812106 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.812119 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.812128 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.914800 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.914835 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.914842 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.914855 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:35 crc kubenswrapper[4924]: I1211 13:53:35.914864 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:35Z","lastTransitionTime":"2025-12-11T13:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.016239 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/0.log" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.016955 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.017031 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.017063 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.017096 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.017121 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.020965 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee" exitCode=1 Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.021009 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.021743 4924 scope.go:117] "RemoveContainer" containerID="9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.038511 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.055760 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.074960 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.089286 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.102902 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.115020 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.120309 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.120354 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.120362 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.120380 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.120389 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.128521 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.141487 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.153694 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.165676 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.178347 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.188906 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.203837 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c9
4024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.216984 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.222668 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.222703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.222712 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.222725 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.222735 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.326380 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.326438 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.326461 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.326480 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.326493 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.429353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.429388 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.429396 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.429413 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.429422 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.531291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.531390 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.531404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.531422 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.531459 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.634039 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.634063 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.634071 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.634083 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.634091 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.736641 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.736693 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.736702 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.736716 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.736724 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.782008 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:36 crc kubenswrapper[4924]: E1211 13:53:36.782155 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.796517 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.811787 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.824442 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.839044 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.839099 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.839111 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc 
kubenswrapper[4924]: I1211 13:53:36.839127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.839137 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.843084 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.855631 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.869263 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.882743 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.894057 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.904241 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.915723 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.928814 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.938262 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.942316 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.942364 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.942374 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.942386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.942395 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:36Z","lastTransitionTime":"2025-12-11T13:53:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.949416 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:36 crc kubenswrapper[4924]: I1211 13:53:36.958555 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.029417 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/0.log" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.033482 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.033633 4924 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.045002 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.045052 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.045064 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.045083 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.045096 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.065491 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.090037 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.105504 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.119636 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.131101 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.143645 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.148003 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.148053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.148064 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.148084 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.148099 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.156240 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.166517 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.180818 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.190911 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.202253 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.212416 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.230197 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.242663 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.250397 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc 
kubenswrapper[4924]: I1211 13:53:37.250446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.250457 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.250476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.250531 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.352752 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.352811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.352827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.352843 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.352853 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.456205 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.456266 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.456288 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.456318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.456389 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.558627 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.558672 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.558681 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.558696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.558713 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.660653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.660687 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.660697 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.660712 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.660722 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.763240 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.763286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.763297 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.763311 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.763320 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.782757 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.782757 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:37 crc kubenswrapper[4924]: E1211 13:53:37.782906 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:37 crc kubenswrapper[4924]: E1211 13:53:37.782986 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.865414 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.865464 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.865476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.865499 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.865515 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.968797 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.968870 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.968888 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.968909 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:37 crc kubenswrapper[4924]: I1211 13:53:37.968925 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:37Z","lastTransitionTime":"2025-12-11T13:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.039190 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/1.log" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.039885 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/0.log" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.042684 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3" exitCode=1 Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.042721 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.042786 4924 scope.go:117] "RemoveContainer" containerID="9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.043770 4924 scope.go:117] "RemoveContainer" containerID="6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3" Dec 11 13:53:38 crc kubenswrapper[4924]: E1211 13:53:38.043977 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.069440 4924 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.070814 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.070858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.070874 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.070894 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.070909 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.084965 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.098968 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.107550 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.120907 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c9
4024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.130455 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.147054 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.160369 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.173932 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.173970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.173982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc 
kubenswrapper[4924]: I1211 13:53:38.174000 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.174014 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.184179 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built 
service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.196727 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.210997 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.224702 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.235320 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.244952 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.275888 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.275917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.275926 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc 
kubenswrapper[4924]: I1211 13:53:38.275938 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.275946 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.378963 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.378997 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.379008 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.379025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.379037 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.482358 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.482396 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.482405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.482419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.482428 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.585107 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.585161 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.585177 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.585198 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.585217 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.687725 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.687765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.687778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.687794 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.687805 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.754733 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp"] Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.755561 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.759139 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.760256 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.765290 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.765347 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.765434 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.765459 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7s2cg\" (UniqueName: \"kubernetes.io/projected/4b1ac75b-7e02-4289-a207-c105e63a2fdc-kube-api-access-7s2cg\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.776507 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.782795 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:38 crc kubenswrapper[4924]: E1211 13:53:38.782959 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.788189 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.790063 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.790107 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.790124 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.790143 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.790159 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.806739 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.819782 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.833015 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.854920 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built 
service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.866924 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.867406 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.867590 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.867692 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2cg\" (UniqueName: \"kubernetes.io/projected/4b1ac75b-7e02-4289-a207-c105e63a2fdc-kube-api-access-7s2cg\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.869423 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 
13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.869551 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.871871 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.877947 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b1ac75b-7e02-4289-a207-c105e63a2fdc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.885953 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2cg\" (UniqueName: \"kubernetes.io/projected/4b1ac75b-7e02-4289-a207-c105e63a2fdc-kube-api-access-7s2cg\") pod \"ovnkube-control-plane-749d76644c-7v2pp\" (UID: \"4b1ac75b-7e02-4289-a207-c105e63a2fdc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.887282 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.892144 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.892191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.892204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.892226 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.892245 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.901026 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.912316 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.923540 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.937055 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.950413 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.964388 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.978193 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:38Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.995487 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.995567 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.995593 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.995625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:38 crc kubenswrapper[4924]: I1211 13:53:38.995644 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:38Z","lastTransitionTime":"2025-12-11T13:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.050113 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/1.log" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.073834 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" Dec 11 13:53:39 crc kubenswrapper[4924]: W1211 13:53:39.089806 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b1ac75b_7e02_4289_a207_c105e63a2fdc.slice/crio-7a8e7fc552aa036bdc356f36233955973d9c5c8bf3312510dd8b2c4ea87a8bd8 WatchSource:0}: Error finding container 7a8e7fc552aa036bdc356f36233955973d9c5c8bf3312510dd8b2c4ea87a8bd8: Status 404 returned error can't find the container with id 7a8e7fc552aa036bdc356f36233955973d9c5c8bf3312510dd8b2c4ea87a8bd8 Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.097498 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.097564 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.097583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.097609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.097628 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.199884 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.199935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.199950 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.199972 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.199989 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.303478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.303531 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.303539 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.303558 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.303627 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.406795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.406997 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.407023 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.407048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.407076 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.509412 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.509480 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.509490 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.509507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.509519 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.611790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.611846 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.611858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.611874 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.611883 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.714236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.714272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.714282 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.714298 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.714309 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.782046 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.782092 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:39 crc kubenswrapper[4924]: E1211 13:53:39.782174 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:39 crc kubenswrapper[4924]: E1211 13:53:39.782238 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.816703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.816739 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.816747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.816762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.816770 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.842704 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-79mv2"] Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.843175 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:39 crc kubenswrapper[4924]: E1211 13:53:39.843242 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.857662 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.871899 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.877374 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.877730 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ws7\" (UniqueName: \"kubernetes.io/projected/39f08493-e794-4e97-bc69-8faa67a120b8-kube-api-access-x9ws7\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.883853 4924 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.895653 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc 
kubenswrapper[4924]: I1211 13:53:39.908979 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.919641 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.920158 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.920184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.920196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.920210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.920218 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:39Z","lastTransitionTime":"2025-12-11T13:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.933164 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.943177 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.954460 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.965161 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.978562 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.978621 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x9ws7\" (UniqueName: \"kubernetes.io/projected/39f08493-e794-4e97-bc69-8faa67a120b8-kube-api-access-x9ws7\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:39 crc kubenswrapper[4924]: E1211 13:53:39.978884 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:39 crc kubenswrapper[4924]: E1211 13:53:39.978923 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:40.478909618 +0000 UTC m=+33.988390595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:39 crc kubenswrapper[4924]: I1211 13:53:39.988139 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built 
service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:39Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.000066 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ws7\" 
(UniqueName: \"kubernetes.io/projected/39f08493-e794-4e97-bc69-8faa67a120b8-kube-api-access-x9ws7\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.004620 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.019892 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.022991 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.023017 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.023026 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.023061 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.023076 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.031362 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.045150 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.059214 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.060770 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" event={"ID":"4b1ac75b-7e02-4289-a207-c105e63a2fdc","Type":"ContainerStarted","Data":"b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.060811 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" 
event={"ID":"4b1ac75b-7e02-4289-a207-c105e63a2fdc","Type":"ContainerStarted","Data":"7a8e7fc552aa036bdc356f36233955973d9c5c8bf3312510dd8b2c4ea87a8bd8"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.124966 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.125004 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.125013 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.125031 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.125040 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.184916 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.184949 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.184960 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.184975 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.184985 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.195771 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.198948 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.198983 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.198991 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.199005 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.199014 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.209443 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.212288 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.212318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.212339 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.212354 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.212363 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.223414 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.227882 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.227918 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.227928 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.227947 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.227957 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.240140 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.242995 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.243024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.243032 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.243045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.243054 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.253651 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.253900 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.255242 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.255278 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.255287 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.255301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.255318 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.329653 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.355991 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built 
service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.357706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.357769 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.357791 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.357819 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.357841 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.372943 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z 
is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.389638 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.399813 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.413260 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.427831 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.442804 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.454253 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.460524 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.460572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.460582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.460601 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.460612 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.464967 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.473434 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.482090 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc 
kubenswrapper[4924]: I1211 13:53:40.483986 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.484153 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.484218 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:41.484201076 +0000 UTC m=+34.993682053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.493606 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.502833 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.522173 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.532271 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.542605 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T1
3:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:40Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.563115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.563146 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.563156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.563171 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.563180 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.664891 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.664942 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.664959 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.664983 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.664999 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.767533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.767583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.767594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.767610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.767619 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.782364 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:40 crc kubenswrapper[4924]: E1211 13:53:40.782523 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.870073 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.870115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.870127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.870141 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.870150 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.972754 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.972790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.972803 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.972818 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:40 crc kubenswrapper[4924]: I1211 13:53:40.972829 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:40Z","lastTransitionTime":"2025-12-11T13:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.066531 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" event={"ID":"4b1ac75b-7e02-4289-a207-c105e63a2fdc","Type":"ContainerStarted","Data":"ce86aadf71d5587b7108067af4aae5297ff514d02edbf818beb4b2a28b2c8452"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.075769 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.075841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.075864 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.075891 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.075916 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.178487 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.178532 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.178549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.178568 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.178579 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.280747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.280781 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.280791 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.280804 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.280814 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.383746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.383800 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.383817 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.383841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.383858 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.392530 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.392737 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:53:57.392720133 +0000 UTC m=+50.902201110 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.486439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.486494 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.486510 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.486533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.486548 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.493979 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.494039 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.494096 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.494117 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.494136 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494545 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494669 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:43.494651569 +0000 UTC m=+37.004132546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494820 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494860 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:57.494852895 +0000 UTC m=+51.004333872 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494882 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494894 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494906 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494932 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:57.494925737 +0000 UTC m=+51.004406714 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.494980 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.495004 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:57.49499437 +0000 UTC m=+51.004475347 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.495020 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.495031 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.495042 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.495065 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:57.495056821 +0000 UTC m=+51.004537798 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.589108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.589150 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.589160 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.589175 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.589184 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.692095 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.692165 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.692195 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.692227 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.692251 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.782710 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.782777 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.782785 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.782984 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.783141 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:41 crc kubenswrapper[4924]: E1211 13:53:41.783290 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.795854 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.795922 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.795945 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.795976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.796000 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.898073 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.898121 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.898133 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.898153 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:41 crc kubenswrapper[4924]: I1211 13:53:41.898169 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:41Z","lastTransitionTime":"2025-12-11T13:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.000629 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.000683 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.000696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.000716 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.000731 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.083787 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.095785 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.102705 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.102753 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.102767 4924 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.102785 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.102796 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.106757 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 
11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.120191 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb184
7346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.130916 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.143606 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.153916 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.164314 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T1
3:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.179645 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for 
removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.191311 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.202546 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.205071 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.205104 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.205113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.205142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.205192 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.212788 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.223326 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.233103 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.244870 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.255751 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.308726 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.308801 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.308826 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.308857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.308882 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.411848 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.411892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.411904 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.411920 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.411931 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.514260 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.514308 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.514317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.514346 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.514358 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.618203 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.618271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.618287 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.618310 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.618346 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.720674 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.720734 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.720747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.720765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.720776 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.782293 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:42 crc kubenswrapper[4924]: E1211 13:53:42.782458 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.822575 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.822620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.822631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.822647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.822658 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.925553 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.925642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.925666 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.925696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:42 crc kubenswrapper[4924]: I1211 13:53:42.925726 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:42Z","lastTransitionTime":"2025-12-11T13:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.028130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.028167 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.028177 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.028191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.028200 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.131025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.131071 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.131082 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.131099 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.131109 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.233085 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.233127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.233137 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.233155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.233165 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.335874 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.335930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.335942 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.335958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.335967 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.438512 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.438556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.438570 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.438588 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.438599 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.515563 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:43 crc kubenswrapper[4924]: E1211 13:53:43.515883 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:43 crc kubenswrapper[4924]: E1211 13:53:43.515991 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:47.515968077 +0000 UTC m=+41.025449064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.540161 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.540199 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.540208 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.540224 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.540235 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.642292 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.642359 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.642368 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.642381 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.642390 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.745070 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.745127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.745149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.745168 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.745181 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.782510 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:43 crc kubenswrapper[4924]: E1211 13:53:43.782660 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.782540 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:43 crc kubenswrapper[4924]: E1211 13:53:43.782738 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.782526 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:43 crc kubenswrapper[4924]: E1211 13:53:43.782800 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.848019 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.848055 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.848067 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.848083 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.848093 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.951138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.951189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.951202 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.951222 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:43 crc kubenswrapper[4924]: I1211 13:53:43.951235 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:43Z","lastTransitionTime":"2025-12-11T13:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.054451 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.054528 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.054547 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.054573 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.054594 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.157116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.157182 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.157194 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.157212 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.157221 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.260373 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.260409 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.260419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.260434 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.260444 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.363048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.363091 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.363108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.363125 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.363135 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.465577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.465642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.465658 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.465680 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.465697 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.568730 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.568784 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.568795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.568810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.568820 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.672127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.672184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.672194 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.672211 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.672222 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.775413 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.775453 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.775465 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.775482 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.775494 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.782248 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:44 crc kubenswrapper[4924]: E1211 13:53:44.782486 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.878746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.878811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.878827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.878857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.878876 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.982578 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.982652 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.982673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.982699 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:44 crc kubenswrapper[4924]: I1211 13:53:44.982718 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:44Z","lastTransitionTime":"2025-12-11T13:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.086425 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.086505 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.086531 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.086561 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.086583 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.190014 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.190088 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.190106 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.190133 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.190150 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.292773 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.292827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.292839 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.292860 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.292872 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.394675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.394703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.394710 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.394723 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.394731 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.498532 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.498582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.498594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.498608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.498617 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.601723 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.601755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.601769 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.601783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.601795 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.704485 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.704535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.704546 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.704562 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.704574 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.783073 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:45 crc kubenswrapper[4924]: E1211 13:53:45.783314 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.783103 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:45 crc kubenswrapper[4924]: E1211 13:53:45.783473 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.783084 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:45 crc kubenswrapper[4924]: E1211 13:53:45.783637 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.806718 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.806757 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.806765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.806777 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.806787 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.909914 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.909955 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.909968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.909985 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:45 crc kubenswrapper[4924]: I1211 13:53:45.910000 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:45Z","lastTransitionTime":"2025-12-11T13:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.013429 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.013484 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.013499 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.013517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.013531 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.116308 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.116415 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.116442 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.116472 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.116496 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.219364 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.219404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.219415 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.219433 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.219444 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.321642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.321711 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.321731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.321757 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.321776 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.423735 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.423765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.423773 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.423786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.423796 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.526286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.526315 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.526336 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.526348 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.526357 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.629177 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.629241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.629262 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.629292 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.629317 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.732509 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.732568 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.732589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.732616 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.732630 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.782636 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:46 crc kubenswrapper[4924]: E1211 13:53:46.782821 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.796409 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.817938 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.831152 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.835197 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.835271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.835292 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.835315 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.835357 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.842954 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.854704 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.871133 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.885720 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.898766 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.915251 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.927309 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.937137 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.937178 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.937190 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.937208 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.937219 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:46Z","lastTransitionTime":"2025-12-11T13:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.949924 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.970970 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:46 crc kubenswrapper[4924]: I1211 13:53:46.985588 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.000301 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347
993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:46Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.017522 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9af201aa43892c29fc94600b27f577c5940edc03ba56df992c0fcc2041fc7aee\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:35Z\\\",\\\"message\\\":\\\"val\\\\nI1211 13:53:34.609898 6230 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 13:53:34.609965 6230 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:53:34.609973 6230 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 13:53:34.609983 6230 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI1211 13:53:34.609976 6230 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 13:53:34.610002 6230 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1211 13:53:34.610008 6230 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1211 13:53:34.610014 6230 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1211 13:53:34.610026 6230 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:53:34.610034 6230 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:53:34.610040 6230 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1211 13:53:34.610041 6230 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 13:53:34.610051 6230 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 13:53:34.610052 6230 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 13:53:34.610057 6230 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1211 13:53:34.610086 6230 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built 
service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:47Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.030032 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:47Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.039774 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.039828 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.039849 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.039873 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.039891 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.141949 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.141995 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.142007 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.142024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.142037 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.245679 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.245748 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.245766 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.245793 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.245812 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.349310 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.349445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.349464 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.349500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.349522 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.452665 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.452703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.452715 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.452733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.452744 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.555582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.555632 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.555644 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.555660 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.555671 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.558971 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:47 crc kubenswrapper[4924]: E1211 13:53:47.559089 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:47 crc kubenswrapper[4924]: E1211 13:53:47.559147 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:53:55.55913076 +0000 UTC m=+49.068611737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.658412 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.658462 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.658471 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.658484 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.658495 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.760580 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.760625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.760637 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.760655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.760667 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.782263 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.782296 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.782396 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:47 crc kubenswrapper[4924]: E1211 13:53:47.782485 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:47 crc kubenswrapper[4924]: E1211 13:53:47.782671 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:47 crc kubenswrapper[4924]: E1211 13:53:47.782842 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.865035 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.865094 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.865113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.865139 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.865156 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.967789 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.967831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.967841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.967856 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:47 crc kubenswrapper[4924]: I1211 13:53:47.967867 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:47Z","lastTransitionTime":"2025-12-11T13:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.070866 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.070906 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.070917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.070934 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.070945 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.173427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.173470 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.173483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.173500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.173512 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.276094 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.276168 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.276189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.276218 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.276241 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.379087 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.379122 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.379132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.379147 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.379159 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.481338 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.481387 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.481404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.481422 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.481432 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.584156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.584200 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.584210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.584224 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.584236 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.685999 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.686042 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.686059 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.686074 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.686085 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.782097 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:48 crc kubenswrapper[4924]: E1211 13:53:48.782212 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.787815 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.787847 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.787857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.787871 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.787882 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.890053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.890113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.890134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.890162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.890183 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.992786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.992848 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.992864 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.992886 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:48 crc kubenswrapper[4924]: I1211 13:53:48.992904 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:48Z","lastTransitionTime":"2025-12-11T13:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.095108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.095145 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.095157 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.095179 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.095200 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.198996 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.199042 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.199093 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.199113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.199126 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.302081 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.302132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.302147 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.302166 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.302181 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.408794 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.408850 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.408860 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.408880 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.408892 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.511491 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.511556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.511582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.511609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.511625 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.614741 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.614805 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.614830 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.614858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.614878 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.717635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.717692 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.717704 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.717721 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.717733 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.782445 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.782509 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.782445 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:49 crc kubenswrapper[4924]: E1211 13:53:49.782723 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:49 crc kubenswrapper[4924]: E1211 13:53:49.782896 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:49 crc kubenswrapper[4924]: E1211 13:53:49.782987 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.820316 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.820423 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.820446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.820475 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.820496 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.924031 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.924107 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.924134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.924166 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:49 crc kubenswrapper[4924]: I1211 13:53:49.924189 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:49Z","lastTransitionTime":"2025-12-11T13:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.027779 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.027825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.027837 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.027855 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.027868 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.130476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.130536 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.130554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.130578 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.130595 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.233130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.233172 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.233196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.233215 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.233226 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.273803 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.274533 4924 scope.go:117] "RemoveContainer" containerID="6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.293644 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.304950 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.337744 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.339427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.339462 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.339473 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.339487 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.339498 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.340515 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.340543 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.340554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.340565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.340574 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.354990 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.363768 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.371269 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.371307 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.371319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.371371 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.371384 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.374829 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a
0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.388034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.388961 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.392206 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.392237 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.392246 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.392259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.392269 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.398780 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff514d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.403571 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.407142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.407174 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.407182 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.407196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.407205 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.410664 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc 
kubenswrapper[4924]: E1211 13:53:50.418865 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423482 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423506 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423534 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423548 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.423559 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.436572 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.438248 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.438392 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.441444 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.441468 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.441478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.441494 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.441505 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.453833 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.463642 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.474456 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.493629 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 
13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.504590 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.514535 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.543212 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.543239 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.543247 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.543259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.543268 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.645179 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.645207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.645215 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.645228 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.645236 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.747387 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.747420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.747431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.747445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.747456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.782539 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:50 crc kubenswrapper[4924]: E1211 13:53:50.782657 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.850123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.850162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.850179 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.850196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.850207 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.952090 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.952132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.952143 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.952161 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:50 crc kubenswrapper[4924]: I1211 13:53:50.952172 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:50Z","lastTransitionTime":"2025-12-11T13:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.054443 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.054469 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.054476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.054489 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.054498 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.100605 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/1.log" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.103628 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.104514 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.118723 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.131132 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.144445 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157419 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157451 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157568 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157604 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.157617 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.171856 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.197246 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 
13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.213162 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.227729 4924 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.240994 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.253382 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.260195 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.260225 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.260244 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.260259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.260268 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.268416 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.280036 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.292392 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.306015 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.318401 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.331512 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:51 crc 
kubenswrapper[4924]: I1211 13:53:51.363570 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.363615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.363626 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.363669 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.363680 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.466152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.466206 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.466219 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.466238 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.466252 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.569679 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.569754 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.569774 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.569801 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.569819 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.672508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.672556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.672569 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.672584 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.672594 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.775676 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.775750 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.775784 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.775820 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.775839 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.782501 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:51 crc kubenswrapper[4924]: E1211 13:53:51.782702 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.782506 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.782504 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:51 crc kubenswrapper[4924]: E1211 13:53:51.782862 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:51 crc kubenswrapper[4924]: E1211 13:53:51.782939 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.878222 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.878274 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.878287 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.878304 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.878315 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.981178 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.981213 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.981223 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.981240 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:51 crc kubenswrapper[4924]: I1211 13:53:51.981251 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:51Z","lastTransitionTime":"2025-12-11T13:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.084041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.084103 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.084123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.084146 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.084164 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.108279 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/2.log" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.109239 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/1.log" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.112025 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c" exitCode=1 Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.112074 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.112123 4924 scope.go:117] "RemoveContainer" containerID="6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.112731 4924 scope.go:117] "RemoveContainer" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c" Dec 11 13:53:52 crc kubenswrapper[4924]: E1211 13:53:52.112907 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.133410 4924 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff514d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.148583 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc 
kubenswrapper[4924]: I1211 13:53:52.163049 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.178247 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.185868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.185907 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.185920 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.185935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.185945 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.187726 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.197239 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.208972 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.221419 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c9
4024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.234224 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.245864 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.263057 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d408799b3769bf042e68ee88a29267554b7b767863794acbca462b6a54ebca3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:37Z\\\",\\\"message\\\":\\\"s:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.58],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 
13:53:37.103171 6355 lb_config.go:1031] Cluster endpoints for openshift-controller-manager-operator/metrics for network=default are: map[]\\\\nI1211 13:53:37.103189 6355 services_controller.go:443] Built service openshift-controller-manager-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.58\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nF1211 13:53:37.103198 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: failed to add event handler: handler {0x1e60340 0x1e60020 0x1e5ffc0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.275752 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.287808 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:
25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51b
d6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.288019 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.288038 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.288048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.288064 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.288075 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.299966 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.315782 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.332897 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:52Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.389969 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.390003 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.390011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc 
kubenswrapper[4924]: I1211 13:53:52.390023 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.390032 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.493102 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.493158 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.493170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.493195 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.493220 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.595844 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.595875 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.595884 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.595899 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.595908 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.697541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.697575 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.697583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.697595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.697604 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.782616 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:52 crc kubenswrapper[4924]: E1211 13:53:52.782838 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.809163 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.809196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.809205 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.809217 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.809225 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.912013 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.912045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.912055 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.912069 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:52 crc kubenswrapper[4924]: I1211 13:53:52.912090 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:52Z","lastTransitionTime":"2025-12-11T13:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.015762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.015821 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.015840 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.015862 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.015879 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117054 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/2.log" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117863 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117875 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.117883 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.120919 4924 scope.go:117] "RemoveContainer" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c" Dec 11 13:53:53 crc kubenswrapper[4924]: E1211 13:53:53.121077 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.136224 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee
65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.152037 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.167126 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.189034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.207663 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.219896 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.219954 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.219972 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc 
kubenswrapper[4924]: I1211 13:53:53.219997 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.220014 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.226601 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13
:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 
13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.241950 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.254135 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.265588 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.275937 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.287301 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc 
kubenswrapper[4924]: I1211 13:53:53.302885 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.319345 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.322048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.322079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.322090 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.322104 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.322116 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.329403 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.339512 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.349833 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:53Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.424738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.424779 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.424790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.424808 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.424820 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.526892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.526950 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.526962 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.526977 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.526988 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.629541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.629589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.629635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.629656 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.629668 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.732463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.732552 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.732571 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.732597 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.732613 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.783027 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:53 crc kubenswrapper[4924]: E1211 13:53:53.783248 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.783022 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.783035 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:53 crc kubenswrapper[4924]: E1211 13:53:53.783502 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:53 crc kubenswrapper[4924]: E1211 13:53:53.783586 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.836135 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.836393 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.836449 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.836478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.836496 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.940011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.940105 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.940122 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.940141 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:53 crc kubenswrapper[4924]: I1211 13:53:53.940160 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:53Z","lastTransitionTime":"2025-12-11T13:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.042620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.042678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.042691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.042722 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.042745 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.145047 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.145099 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.145113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.145136 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.145151 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.248048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.248102 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.248111 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.248125 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.248134 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.350626 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.350670 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.350681 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.350695 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.350706 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.452532 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.452563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.452571 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.452585 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.452597 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.555490 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.555740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.555756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.555773 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.555787 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.658678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.658738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.658757 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.658784 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.658801 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.762009 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.762068 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.762080 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.762096 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.762108 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.782475 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:54 crc kubenswrapper[4924]: E1211 13:53:54.782674 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.865831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.865885 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.865897 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.865913 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.865925 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.967807 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.967897 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.967920 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.967953 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:54 crc kubenswrapper[4924]: I1211 13:53:54.967972 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:54Z","lastTransitionTime":"2025-12-11T13:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.071206 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.071317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.071374 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.071393 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.071440 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.173868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.173928 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.173942 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.173967 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.173979 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.276145 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.276220 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.276239 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.276263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.276280 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.378527 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.378567 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.378589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.378610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.378623 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.481609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.481707 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.481724 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.481747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.481760 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.583684 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.583751 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.583768 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.583783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.583795 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.642573 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:55 crc kubenswrapper[4924]: E1211 13:53:55.642734 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:55 crc kubenswrapper[4924]: E1211 13:53:55.642793 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:11.642778655 +0000 UTC m=+65.152259632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.686907 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.686957 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.686967 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.686984 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.686996 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.782083 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.782189 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:55 crc kubenswrapper[4924]: E1211 13:53:55.782221 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.782356 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:55 crc kubenswrapper[4924]: E1211 13:53:55.782414 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:55 crc kubenswrapper[4924]: E1211 13:53:55.782524 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.789316 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.789405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.789421 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.789447 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.789464 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.892659 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.892713 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.892737 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.892760 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.892776 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.994757 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.994790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.994800 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.994815 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:55 crc kubenswrapper[4924]: I1211 13:53:55.994826 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:55Z","lastTransitionTime":"2025-12-11T13:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.097541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.097617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.097636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.097671 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.097710 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.201931 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.201974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.201982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.201995 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.202007 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.305167 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.305221 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.305241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.305265 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.305281 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.412455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.412493 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.412500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.412517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.412526 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.514685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.515131 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.515143 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.515162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.515174 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.617372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.617444 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.617462 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.617493 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.617511 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.721027 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.721097 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.721109 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.721130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.721143 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.782770 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:56 crc kubenswrapper[4924]: E1211 13:53:56.782970 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.798855 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.813005 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823867 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823912 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823922 4924 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823946 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.823920 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 
11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.838435 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb184
7346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.852476 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.867963 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.877225 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.888243 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T1
3:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.907379 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditi
ons:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.919077 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.926957 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:56 crc 
kubenswrapper[4924]: I1211 13:53:56.927003 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.927018 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.927035 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.927047 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:56Z","lastTransitionTime":"2025-12-11T13:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.930750 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.943517 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.957198 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.972457 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.986016 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:56 crc kubenswrapper[4924]: I1211 13:53:56.998470 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:56Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.029214 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.029262 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.029280 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.029305 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.029351 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.133664 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.134140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.134157 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.134171 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.134180 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.236721 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.236761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.236771 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.236786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.236802 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.340450 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.340529 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.340541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.340559 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.340570 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.442862 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.442903 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.442915 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.442929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.442940 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.461707 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.461834 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:54:29.461808828 +0000 UTC m=+82.971289825 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.547000 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.547040 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.547051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.547066 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.547077 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.563180 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.563249 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.563280 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.563317 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563412 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563487 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563510 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563539 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563553 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563499 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:29.563482237 +0000 UTC m=+83.072963214 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563609 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:29.56358922 +0000 UTC m=+83.073070247 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563492 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563698 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563624 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:29.563616101 +0000 UTC m=+83.073097138 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563727 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.563813 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:29.563791406 +0000 UTC m=+83.073272393 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.650377 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.650421 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.650431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.650446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.650456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.753021 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.753077 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.753095 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.753118 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.753135 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.782471 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.782491 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.782513 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.782562 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.782712 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:57 crc kubenswrapper[4924]: E1211 13:53:57.782810 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.839056 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.850162 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.855761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.855842 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.855859 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.855921 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.855947 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.859853 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.880126 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.891623 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.903516 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.913347 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.922686 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc 
kubenswrapper[4924]: I1211 13:53:57.933743 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.945284 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.956071 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.958669 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.958706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.958716 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.958733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.958743 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:57Z","lastTransitionTime":"2025-12-11T13:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.967924 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.978890 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:57 crc kubenswrapper[4924]: I1211 13:53:57.992699 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:57Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.006278 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:58Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.018545 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:58Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.040961 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:58Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.052958 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:53:58Z is after 2025-08-24T17:21:41Z" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.060817 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc 
kubenswrapper[4924]: I1211 13:53:58.060844 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.060853 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.060866 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.060875 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.163042 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.163078 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.163089 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.163104 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.163113 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.264792 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.264834 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.264845 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.264860 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.264879 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.367610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.367665 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.367677 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.367696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.367708 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.470319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.470403 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.470419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.470443 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.470461 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.572636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.572691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.572707 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.572728 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.572745 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.677693 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.677780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.677816 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.677836 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.677848 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.779978 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.780011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.780039 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.780051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.780060 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.782407 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:53:58 crc kubenswrapper[4924]: E1211 13:53:58.782490 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.882223 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.882278 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.882294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.882318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.882364 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.984955 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.985409 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.985592 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.985750 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:58 crc kubenswrapper[4924]: I1211 13:53:58.985876 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:58Z","lastTransitionTime":"2025-12-11T13:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.088300 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.088404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.088426 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.088445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.088456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.190770 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.190818 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.190832 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.190851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.190867 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.293656 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.293716 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.293739 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.293758 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.293772 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.396738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.396884 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.396984 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.397032 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.397057 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.500065 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.500115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.500128 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.500147 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.500159 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.603337 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.603398 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.603415 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.603439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.603463 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.706307 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.706377 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.706387 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.706402 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.706412 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.782811 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.782847 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.782934 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:53:59 crc kubenswrapper[4924]: E1211 13:53:59.782983 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:53:59 crc kubenswrapper[4924]: E1211 13:53:59.783077 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:53:59 crc kubenswrapper[4924]: E1211 13:53:59.783162 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.809730 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.809788 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.809796 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.809810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.809819 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.912617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.912691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.912714 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.912744 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:53:59 crc kubenswrapper[4924]: I1211 13:53:59.912767 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:53:59Z","lastTransitionTime":"2025-12-11T13:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.015079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.015154 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.015167 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.015195 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.015209 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.118124 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.118158 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.118168 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.118181 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.118190 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.439147 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.439199 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.439207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.439221 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.439229 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.542486 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.542554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.542576 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.542605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.542626 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.644970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.645022 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.645035 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.645056 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.645070 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.747887 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.747928 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.747940 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.747958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.747968 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.783099 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:00 crc kubenswrapper[4924]: E1211 13:54:00.783234 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.828914 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.828958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.828970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.828988 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.829000 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: E1211 13:54:00.842616 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:00Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.846135 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.846178 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.846190 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.846207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.846218 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.861678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.861738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.861761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.861792 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.861814 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.882209 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.882246 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.882259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.882277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.882288 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: E1211 13:54:00.895388 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:00Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.899214 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.899256 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.899274 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.899294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.899305 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:00 crc kubenswrapper[4924]: E1211 13:54:00.914483 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:00Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:00 crc kubenswrapper[4924]: E1211 13:54:00.914593 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.915859 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.915899 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.915910 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.915929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:00 crc kubenswrapper[4924]: I1211 13:54:00.915942 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:00Z","lastTransitionTime":"2025-12-11T13:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.018507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.018547 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.018555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.018592 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.018604 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.120560 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.120594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.120603 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.120620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.120628 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.223286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.223338 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.223375 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.223397 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.223411 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.326636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.326709 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.326723 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.326748 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.326768 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.429056 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.429088 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.429100 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.429116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.429128 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.532556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.532594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.532606 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.532624 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.532642 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.635768 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.635826 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.635847 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.635875 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.635895 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.738914 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.738976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.738993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.739016 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.739032 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.782140 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.782167 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.782173 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:01 crc kubenswrapper[4924]: E1211 13:54:01.782310 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:01 crc kubenswrapper[4924]: E1211 13:54:01.782521 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:01 crc kubenswrapper[4924]: E1211 13:54:01.782657 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.841873 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.841974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.841987 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.842021 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.842049 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.944211 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.944269 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.944290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.944312 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:01 crc kubenswrapper[4924]: I1211 13:54:01.944360 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:01Z","lastTransitionTime":"2025-12-11T13:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.046631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.046685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.046700 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.046742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.046783 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.150176 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.150241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.150262 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.150291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.150313 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.253930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.253983 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.253999 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.254026 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.254046 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.356931 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.356994 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.357015 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.357045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.357067 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.459492 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.459554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.459566 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.459585 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.459599 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.562037 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.562075 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.562085 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.562100 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.562110 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.664444 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.664522 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.664540 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.664560 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.664572 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.767365 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.767421 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.767430 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.767445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.767456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.782769 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:02 crc kubenswrapper[4924]: E1211 13:54:02.782905 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.870301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.870365 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.870378 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.870396 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.870408 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.973377 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.973427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.973442 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.973460 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:02 crc kubenswrapper[4924]: I1211 13:54:02.973473 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:02Z","lastTransitionTime":"2025-12-11T13:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.077051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.077522 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.077776 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.077957 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.078090 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.194515 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.194802 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.194881 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.194960 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.195027 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.298299 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.298395 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.298419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.298446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.298464 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.401321 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.401405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.401420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.401441 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.401457 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.504054 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.504100 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.504108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.504122 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.504131 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.606591 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.606658 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.606674 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.606700 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.606720 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.709860 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.709905 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.709917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.709933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.709945 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.782607 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.782648 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.782778 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:03 crc kubenswrapper[4924]: E1211 13:54:03.782982 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:03 crc kubenswrapper[4924]: E1211 13:54:03.783107 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:03 crc kubenswrapper[4924]: E1211 13:54:03.783239 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.812550 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.812685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.812719 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.812752 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.812779 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.915464 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.915507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.915516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.915530 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:03 crc kubenswrapper[4924]: I1211 13:54:03.915539 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:03Z","lastTransitionTime":"2025-12-11T13:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.018221 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.018253 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.018261 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.018273 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.018282 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.120677 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.120755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.120778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.120806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.120830 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.223656 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.223739 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.223774 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.223808 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.223831 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.326017 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.326087 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.326101 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.326119 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.326130 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.428210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.428253 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.428271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.428287 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.428300 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.530715 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.530744 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.530752 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.530781 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.530792 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.633516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.633580 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.633637 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.633662 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.633680 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.735757 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.735797 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.735807 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.735823 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.735833 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.782499 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:04 crc kubenswrapper[4924]: E1211 13:54:04.782663 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.838710 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.838754 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.838768 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.838783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.838796 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.942087 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.942506 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.942535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.942559 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:04 crc kubenswrapper[4924]: I1211 13:54:04.942577 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:04Z","lastTransitionTime":"2025-12-11T13:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.045367 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.045408 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.045417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.045431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.045443 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.148053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.148098 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.148108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.148123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.148137 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.250312 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.250588 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.250664 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.250734 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.250803 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.352917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.352974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.352990 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.353012 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.353027 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.455499 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.455751 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.455995 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.456156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.456231 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.558370 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.558421 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.558432 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.558448 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.558459 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.660488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.660533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.660546 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.660563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.660578 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.763010 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.763259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.763288 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.763392 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.763431 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.782492 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.782539 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:05 crc kubenswrapper[4924]: E1211 13:54:05.782694 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.782713 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:05 crc kubenswrapper[4924]: E1211 13:54:05.783001 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:05 crc kubenswrapper[4924]: E1211 13:54:05.782894 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.867496 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.867553 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.867565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.867582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.867594 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.970027 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.970070 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.970078 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.970090 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:05 crc kubenswrapper[4924]: I1211 13:54:05.970098 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:05Z","lastTransitionTime":"2025-12-11T13:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.072730 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.073036 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.073180 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.073290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.073430 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.175672 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.175710 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.175722 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.175738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.175754 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.279388 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.279436 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.279450 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.279468 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.279483 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.381929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.381978 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.381990 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.382011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.382025 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.484747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.484788 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.484802 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.484819 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.484832 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.587876 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.587919 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.587929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.587943 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.587954 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.690051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.690113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.690130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.690156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.690173 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.782044 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:06 crc kubenswrapper[4924]: E1211 13:54:06.782143 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.792110 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.792140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.792149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.792161 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.792171 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.796788 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.808645 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.830600 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.846704 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.861285 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:
25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete 
has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51b
d6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.874525 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.886168 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.894017 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.894041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.894051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.894067 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.894078 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.898410 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.912569 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.924669 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.935567 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.946396 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.955452 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc 
kubenswrapper[4924]: I1211 13:54:06.964983 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.975609 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.987886 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.995964 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:06Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.996694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.996728 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.996740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.996759 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:06 crc kubenswrapper[4924]: I1211 13:54:06.996771 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:06Z","lastTransitionTime":"2025-12-11T13:54:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.100232 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.100277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.100291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.100308 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.100320 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.202271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.202305 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.202316 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.202396 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.202407 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.304924 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.304994 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.305017 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.305044 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.305066 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.407095 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.407147 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.407163 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.407186 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.407203 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.509500 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.509539 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.509550 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.509565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.509576 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.612005 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.612059 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.612072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.612092 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.612105 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.714717 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.714773 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.714790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.714810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.714826 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.782754 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.782748 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.782893 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:07 crc kubenswrapper[4924]: E1211 13:54:07.783036 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:07 crc kubenswrapper[4924]: E1211 13:54:07.783107 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:07 crc kubenswrapper[4924]: E1211 13:54:07.783192 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.784290 4924 scope.go:117] "RemoveContainer" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c"
Dec 11 13:54:07 crc kubenswrapper[4924]: E1211 13:54:07.784719 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.817110 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.817153 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.817166 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.817184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.817196 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.919306 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.919400 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.919417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.919443 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:07 crc kubenswrapper[4924]: I1211 13:54:07.919462 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:07Z","lastTransitionTime":"2025-12-11T13:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.022152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.022199 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.022210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.022230 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.022241 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.124439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.124498 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.124510 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.124535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.124547 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.226880 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.226926 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.226936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.226952 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.226963 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.330375 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.330433 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.330470 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.330491 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.330506 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.432897 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.432946 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.432954 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.432968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.432977 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.534991 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.535025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.535034 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.535051 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.535061 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.637236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.637272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.637285 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.637300 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.637311 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.739984 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.740059 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.740072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.740091 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.740103 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.782811 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:54:08 crc kubenswrapper[4924]: E1211 13:54:08.783052 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.842179 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.842240 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.842258 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.842284 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.842301 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.944353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.944409 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.944425 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.944447 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:08 crc kubenswrapper[4924]: I1211 13:54:08.944462 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:08Z","lastTransitionTime":"2025-12-11T13:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.046277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.046358 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.046376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.046398 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.046414 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.150089 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.150130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.150138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.150152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.150163 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.251934 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.251971 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.251979 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.251993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.252002 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.354756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.354831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.354852 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.354876 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.354892 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.457864 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.457910 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.457919 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.457938 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.457951 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.560764 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.560844 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.560865 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.560890 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.560907 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.662888 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.662942 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.662953 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.662971 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.662985 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.766472 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.766548 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.766565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.766594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.766613 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.782692 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.782740 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.782692 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:54:09 crc kubenswrapper[4924]: E1211 13:54:09.782854 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:09 crc kubenswrapper[4924]: E1211 13:54:09.782909 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:54:09 crc kubenswrapper[4924]: E1211 13:54:09.783028 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.869180 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.869250 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.869259 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.869272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.869281 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.971944 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.972023 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.972062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.972095 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:09 crc kubenswrapper[4924]: I1211 13:54:09.972120 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:09Z","lastTransitionTime":"2025-12-11T13:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.074415 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.074453 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.074463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.074478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.074488 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.177212 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.177508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.177609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.177708 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.177804 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.281080 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.281119 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.281127 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.281142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.281152 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.383411 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.383743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.383879 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.383968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.384058 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.486931 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.487215 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.487281 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.487370 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.487467 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.589796 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.589829 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.589839 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.589855 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.589865 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.692091 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.692416 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.692521 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.692616 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.692697 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.782620 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:10 crc kubenswrapper[4924]: E1211 13:54:10.783133 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.795369 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.795420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.795453 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.795501 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.795519 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.898149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.898197 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.898210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.898229 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:10 crc kubenswrapper[4924]: I1211 13:54:10.898242 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:10Z","lastTransitionTime":"2025-12-11T13:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.001371 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.001422 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.001433 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.001449 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.001460 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.103694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.103770 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.103806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.103837 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.103859 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.206825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.206868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.206883 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.206902 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.206915 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.259511 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.259555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.259565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.259581 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.259592 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.275155 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.279505 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.279605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.279667 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.279738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.279798 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.293425 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.297686 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.297736 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.297802 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.297827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.297846 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.319194 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.324720 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.325203 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.325463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.325673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.325896 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.341795 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.346115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.346185 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.346207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.346233 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.346252 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.361236 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.361569 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.363385 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.363432 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.363449 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.363475 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.363494 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.466863 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.466916 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.466936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.466960 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.466977 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.569548 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.569608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.569631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.569657 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.569674 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.670669 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.670829 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.671041 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:54:43.671023363 +0000 UTC m=+97.180504340 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.672795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.672832 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.672841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.672858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.672867 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.775443 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.775484 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.775495 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.775515 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.775531 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.782728 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.782795 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.782804 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.782929 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.783005 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:11 crc kubenswrapper[4924]: E1211 13:54:11.783185 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.877090 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.877117 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.877124 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.877136 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.877145 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.979068 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.979105 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.979117 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.979132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:11 crc kubenswrapper[4924]: I1211 13:54:11.979144 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:11Z","lastTransitionTime":"2025-12-11T13:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.081565 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.081629 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.081646 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.081671 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.081689 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.184588 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.184913 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.184990 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.185065 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.185124 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.286871 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.287182 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.287322 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.287467 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.287563 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.389746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.389780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.389791 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.389805 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.389816 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.477789 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/0.log" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.477838 4924 generic.go:334] "Generic (PLEG): container finished" podID="5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c" containerID="ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a" exitCode=1 Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.477871 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerDied","Data":"ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.478394 4924 scope.go:117] "RemoveContainer" containerID="ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492231 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492363 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492373 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492388 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.492399 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.506194 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.519008 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.540198 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.552210 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.564062 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.576186 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.587730 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.594277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.594311 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.594337 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.594354 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.594366 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.599796 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.613775 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.623108 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc 
kubenswrapper[4924]: I1211 13:54:12.632858 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.641366 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.652550 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.660652 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.670251 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.679281 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.697290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.697337 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.697347 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.697362 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.697372 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.782686 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:12 crc kubenswrapper[4924]: E1211 13:54:12.785403 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.799523 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.799556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.799567 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.799580 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.799589 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.901665 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.901725 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.901740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.901755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:12 crc kubenswrapper[4924]: I1211 13:54:12.901765 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:12Z","lastTransitionTime":"2025-12-11T13:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.004119 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.004189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.004200 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.004216 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.004227 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.106343 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.106372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.106384 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.106402 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.106412 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.209237 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.209271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.209280 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.209295 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.209305 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.311028 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.311068 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.311079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.311096 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.311108 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.413638 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.413675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.413689 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.413705 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.413717 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.482087 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/0.log" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.482131 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerStarted","Data":"59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.496739 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.507856 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.516123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.516162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.516174 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.516191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.516202 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.521639 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.532086 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.541182 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc 
kubenswrapper[4924]: I1211 13:54:13.550233 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.559228 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.571579 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100
227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.582631 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.593587 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.612524 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.618202 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.618249 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.618260 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.618277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.618287 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.625995 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6e
e181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.638117 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.650826 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.663216 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.675349 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.690106 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.720958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.721152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.721224 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.721286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.721369 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.782777 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:13 crc kubenswrapper[4924]: E1211 13:54:13.782909 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.782777 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:13 crc kubenswrapper[4924]: E1211 13:54:13.783128 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.782803 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:13 crc kubenswrapper[4924]: E1211 13:54:13.783378 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.828309 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.828362 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.828371 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.828386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.828394 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.930107 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.930133 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.930141 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.930152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:13 crc kubenswrapper[4924]: I1211 13:54:13.930161 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:13Z","lastTransitionTime":"2025-12-11T13:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.032904 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.032962 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.032970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.032983 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.032992 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.135473 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.135508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.135517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.135533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.135542 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.240242 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.240609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.240718 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.240822 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.240970 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.343976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.344014 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.344024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.344037 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.344046 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.447688 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.447726 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.447734 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.447749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.447758 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.550375 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.550439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.550454 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.550474 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.550486 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.653419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.653455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.653466 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.653482 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.653490 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.756538 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.756582 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.756594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.756613 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.756624 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.782383 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:14 crc kubenswrapper[4924]: E1211 13:54:14.782946 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.859396 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.859447 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.859461 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.859478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.859489 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.961735 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.961786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.961798 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.961816 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:14 crc kubenswrapper[4924]: I1211 13:54:14.961828 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:14Z","lastTransitionTime":"2025-12-11T13:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.064554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.064589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.064597 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.064610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.064621 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.167789 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.167819 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.167827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.167841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.167850 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.270407 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.270456 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.270466 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.270480 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.270490 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.372985 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.373031 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.373045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.373062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.373073 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.475575 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.475851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.475934 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.476032 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.476127 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.578435 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.578487 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.578506 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.578525 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.578537 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.680818 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.681115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.681197 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.681279 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.681358 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.781966 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:15 crc kubenswrapper[4924]: E1211 13:54:15.782409 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.782090 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:15 crc kubenswrapper[4924]: E1211 13:54:15.782628 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.782036 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:15 crc kubenswrapper[4924]: E1211 13:54:15.782799 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.783280 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.783314 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.783337 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.783350 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.783358 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.886251 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.887055 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.887162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.887250 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.887365 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.989580 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.989620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.989631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.989647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:15 crc kubenswrapper[4924]: I1211 13:54:15.989658 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:15Z","lastTransitionTime":"2025-12-11T13:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.091625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.091663 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.091673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.091688 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.091699 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.194291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.194341 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.194350 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.194365 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.194375 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.296502 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.296540 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.296550 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.296564 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.296573 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.398760 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.398799 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.398811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.398829 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.398842 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.501509 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.501548 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.501559 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.501578 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.501589 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.604594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.604636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.604647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.604664 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.604676 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.709615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.709653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.709662 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.709675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.709684 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.782402 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:16 crc kubenswrapper[4924]: E1211 13:54:16.782591 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.795720 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.805972 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.811456 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.811484 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.811495 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc 
kubenswrapper[4924]: I1211 13:54:16.811508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.811518 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.833112 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.846079 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.860440 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.873385 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.886136 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.895764 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.904034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.913471 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.913525 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.913535 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.913549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.913559 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:16Z","lastTransitionTime":"2025-12-11T13:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.914526 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a
0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.925908 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.938512 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.949771 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.962022 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.974598 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.988357 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c9
4024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:16 crc kubenswrapper[4924]: I1211 13:54:16.998175 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.016281 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.016317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.016338 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.016353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.016363 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.119301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.119368 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.119382 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.119398 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.119411 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.221404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.221427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.221434 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.221446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.221454 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.323818 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.323856 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.323866 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.323880 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.323892 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.426591 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.426632 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.426648 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.426670 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.426688 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.529534 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.529622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.529632 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.529652 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.529663 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.631938 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.631961 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.631970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.631982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.631990 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.734241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.734278 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.734286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.734303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.734317 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.782626 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.782686 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.782703 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:17 crc kubenswrapper[4924]: E1211 13:54:17.782788 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:17 crc kubenswrapper[4924]: E1211 13:54:17.782881 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:17 crc kubenswrapper[4924]: E1211 13:54:17.783005 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.837345 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.837376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.837384 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.837398 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.837407 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.939236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.939577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.939653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.939691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:17 crc kubenswrapper[4924]: I1211 13:54:17.939709 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:17Z","lastTransitionTime":"2025-12-11T13:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.042821 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.042866 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.042881 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.042898 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.042911 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.145760 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.145803 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.145811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.145827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.145837 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.248863 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.248892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.248900 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.248913 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.248922 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.351359 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.351397 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.351406 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.351420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.351430 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.454543 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.454589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.454601 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.454618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.454628 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.558079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.558132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.558141 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.558158 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.558168 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.660628 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.660654 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.660662 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.660675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.660684 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.763733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.763760 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.763769 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.763782 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.763790 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.782301 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:18 crc kubenswrapper[4924]: E1211 13:54:18.782488 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.866283 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.866317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.866347 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.866369 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.866381 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.968333 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.968383 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.968394 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.968409 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:18 crc kubenswrapper[4924]: I1211 13:54:18.968421 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:18Z","lastTransitionTime":"2025-12-11T13:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.071205 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.071239 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.071249 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.071265 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.071275 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.173932 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.174026 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.174045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.174069 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.174085 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.276590 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.276636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.276647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.276663 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.276678 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.378483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.378524 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.378533 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.378550 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.378560 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.481404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.481729 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.481856 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.481968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.482064 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.585625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.585676 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.585687 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.585708 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.585723 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.689575 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.689620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.689631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.689653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.689668 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.782892 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.783007 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:19 crc kubenswrapper[4924]: E1211 13:54:19.783027 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.783058 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:19 crc kubenswrapper[4924]: E1211 13:54:19.783407 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:19 crc kubenswrapper[4924]: E1211 13:54:19.783525 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.791878 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.791924 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.791935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.791974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.792007 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.894361 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.894391 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.894423 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.894441 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.894451 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.997401 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.997441 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.997455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.997473 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:19 crc kubenswrapper[4924]: I1211 13:54:19.997487 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:19Z","lastTransitionTime":"2025-12-11T13:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.099686 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.099742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.099756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.099777 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.099790 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.202203 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.202234 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.202242 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.202258 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.202266 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.305710 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.305780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.305793 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.305811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.305826 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.408485 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.408851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.408982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.409138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.409248 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.511407 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.511456 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.511467 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.511483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.511494 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.614045 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.614101 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.614113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.614131 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.614140 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.717132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.717175 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.717187 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.717209 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.717254 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.782867 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:20 crc kubenswrapper[4924]: E1211 13:54:20.783454 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.783803 4924 scope.go:117] "RemoveContainer" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.819277 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.819832 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.819842 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.819855 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.819865 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.923440 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.923503 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.923517 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.923536 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:20 crc kubenswrapper[4924]: I1211 13:54:20.923548 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:20Z","lastTransitionTime":"2025-12-11T13:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.027248 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.027302 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.027314 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.027344 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.027358 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.131023 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.131110 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.131137 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.131175 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.131201 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.234236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.234289 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.234301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.234319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.234353 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.337534 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.337596 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.337610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.337635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.337650 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.367204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.367260 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.367271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.367293 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.367308 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.382805 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.388716 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.388740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.388749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.388764 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.388775 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.405094 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.412123 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.412159 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.412170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.412189 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.412204 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.424855 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.428743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.428783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.428795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.428812 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.428824 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.444986 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.449140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.449191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.449207 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.449225 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.449240 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.467724 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.467871 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.469639 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.469682 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.469704 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.469733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.469749 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.512194 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/2.log" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.519156 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.519663 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.543084 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: 
[]services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.563543 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.572204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.572243 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.572255 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.572271 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.572281 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.580169 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.592088 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.606173 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.616711 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.628843 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf
41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.639722 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.649832 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.662302 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.674476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.674518 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.674526 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.674541 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.674550 4924 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.676616 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff514d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.686985 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc 
kubenswrapper[4924]: I1211 13:54:21.697240 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.705889 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.720832 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.730924 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.742804 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T1
3:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.776559 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.776600 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.776610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.776627 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.776645 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.782813 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.782922 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.782813 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.782979 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.782813 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:21 crc kubenswrapper[4924]: E1211 13:54:21.783071 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.879128 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.879166 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.879177 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.879236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.879259 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.981120 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.981170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.981178 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.981196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:21 crc kubenswrapper[4924]: I1211 13:54:21.981206 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:21Z","lastTransitionTime":"2025-12-11T13:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.083585 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.083648 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.083664 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.083685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.083699 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.186108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.186146 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.186155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.186170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.186179 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.288743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.288785 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.288796 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.288812 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.288824 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.390899 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.390936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.390948 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.390962 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.390973 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.493642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.493711 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.493727 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.493745 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.493756 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.523530 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/3.log" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.524009 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/2.log" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.526716 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" exitCode=1 Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.526749 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.526779 4924 scope.go:117] "RemoveContainer" containerID="5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.527356 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 13:54:22 crc kubenswrapper[4924]: E1211 13:54:22.527482 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.547831 4924 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.558307 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.579032 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c31e04fcddb0e9d1aaaf69161b3318678212cd96435cfb30c1cc21103e4901c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:53:51Z\\\",\\\"message\\\":\\\"us:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1211 13:53:51.046088 6560 services_controller.go:451] Built service openshift-console/downloads cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-console/downloads_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-console/downloads\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.213\\\\\\\", Port:80, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1211 13:53:51.045961 6560 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for node Informer during admin network policy controller initialization, handler {0x1fcb760 0x1fcb440 0x1fcb3e0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:22Z\\\",\\\"message\\\":\\\"penshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 13:54:21.885388 6961 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.885431 6961 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.886132 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:54:21.886193 6961 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 13:54:21.886218 6961 factory.go:656] Stopping watch factory\\\\nI1211 13:54:21.886240 6961 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:54:21.886265 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:54:21.886940 6961 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 13:54:21.886957 6961 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 13:54:21.887010 6961 ovnkube.go:599] Stopped ovnkube\\\\nI1211 13:54:21.887039 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 13:54:21.887115 6961 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\
"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"
env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 
13:54:22.591553 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\
\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.596360 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.596391 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.596399 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.596413 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.596421 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.605040 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.617057 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.631304 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.645183 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.656704 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.668079 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc 
kubenswrapper[4924]: I1211 13:54:22.678392 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.690078 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.701236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.701278 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.701290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.701305 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.701316 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.704220 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.714043 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.727203 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T1
3:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.738470 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.751034 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471
693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.782585 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:22 crc kubenswrapper[4924]: E1211 13:54:22.782742 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.803820 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.803865 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.803877 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.803892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.803905 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.909668 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.909706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.909714 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.909728 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:22 crc kubenswrapper[4924]: I1211 13:54:22.909738 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:22Z","lastTransitionTime":"2025-12-11T13:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.011866 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.011909 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.011917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.011931 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.011941 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.115089 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.115133 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.115149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.115169 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.115183 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.217470 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.217507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.217516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.217529 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.217539 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.320002 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.320050 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.320087 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.320105 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.320118 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.423058 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.423136 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.423160 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.423190 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.423210 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.526812 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.526859 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.526872 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.526891 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.526906 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.532396 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/3.log" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.537624 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 13:54:23 crc kubenswrapper[4924]: E1211 13:54:23.537917 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.558475 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.572666 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.585430 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.595825 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.608517 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc 
kubenswrapper[4924]: I1211 13:54:23.624396 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.628781 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.628819 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.628827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.628841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.628850 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.633640 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.646900 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.655879 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.675556 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.689101 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.706919 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:22Z\\\",\\\"message\\\":\\\"penshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 13:54:21.885388 6961 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.885431 6961 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.886132 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:54:21.886193 6961 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 13:54:21.886218 6961 factory.go:656] Stopping watch factory\\\\nI1211 13:54:21.886240 6961 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:54:21.886265 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:54:21.886940 6961 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 13:54:21.886957 6961 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 13:54:21.887010 6961 ovnkube.go:599] Stopped ovnkube\\\\nI1211 13:54:21.887039 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 13:54:21.887115 6961 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:54:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.726439 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.730963 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.731013 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.731024 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.731040 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.731052 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.745630 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.764663 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.781953 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.782089 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.782185 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.782072 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:23 crc kubenswrapper[4924]: E1211 13:54:23.782300 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:23 crc kubenswrapper[4924]: E1211 13:54:23.782242 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:23 crc kubenswrapper[4924]: E1211 13:54:23.782401 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.794610 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.833077 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.833122 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.833135 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc 
kubenswrapper[4924]: I1211 13:54:23.833150 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.833161 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.935827 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.935882 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.935898 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.935921 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:23 crc kubenswrapper[4924]: I1211 13:54:23.935938 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:23Z","lastTransitionTime":"2025-12-11T13:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.039367 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.039453 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.039482 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.039512 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.039543 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.142648 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.142685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.142694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.142709 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.142719 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.245069 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.245136 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.245153 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.245178 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.245194 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.348114 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.348171 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.348184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.348209 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.348222 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.451059 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.451120 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.451134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.451159 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.451175 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.554622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.554685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.554703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.554726 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.554742 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.657572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.657602 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.657610 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.657624 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.657632 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.759815 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.759868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.759879 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.759896 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.759908 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.782494 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:24 crc kubenswrapper[4924]: E1211 13:54:24.782702 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.862670 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.862746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.862765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.862790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.862807 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.965618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.965655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.965663 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.965677 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:24 crc kubenswrapper[4924]: I1211 13:54:24.965685 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:24Z","lastTransitionTime":"2025-12-11T13:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.067393 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.067456 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.067467 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.067481 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.067491 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.169455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.169495 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.169505 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.169520 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.169530 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.272251 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.272291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.272303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.272318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.272340 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.374973 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.375059 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.375092 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.375121 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.375143 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.478828 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.478896 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.478909 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.478932 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.478947 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.582001 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.582038 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.582049 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.582062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.582073 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.684272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.684320 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.684350 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.684371 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.684389 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.782810 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.782897 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.782934 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:25 crc kubenswrapper[4924]: E1211 13:54:25.783031 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:25 crc kubenswrapper[4924]: E1211 13:54:25.783161 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:25 crc kubenswrapper[4924]: E1211 13:54:25.783318 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.786422 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.786488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.786506 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.786531 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.786549 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.889483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.889542 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.889556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.889572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.889584 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.992512 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.992563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.992571 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.992586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:25 crc kubenswrapper[4924]: I1211 13:54:25.992596 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:25Z","lastTransitionTime":"2025-12-11T13:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.095516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.095572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.095587 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.095608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.095621 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.198787 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.198851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.198868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.198892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.198909 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.300783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.300824 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.300835 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.300852 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.300864 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.403696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.403768 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.403789 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.403821 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.403843 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.507472 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.507532 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.507549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.507571 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.507588 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.610591 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.610639 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.610651 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.610669 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.610681 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.713184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.713232 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.713245 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.713263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.713275 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.782086 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:26 crc kubenswrapper[4924]: E1211 13:54:26.782220 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.799255 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.815094 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb21
5de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.815958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.815984 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.815994 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc 
kubenswrapper[4924]: I1211 13:54:26.816009 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.816020 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.832809 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:22Z\\\",\\\"message\\\":\\\"penshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 13:54:21.885388 6961 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.885431 6961 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.886132 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:54:21.886193 6961 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 13:54:21.886218 6961 factory.go:656] Stopping watch factory\\\\nI1211 13:54:21.886240 6961 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:54:21.886265 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:54:21.886940 6961 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 13:54:21.886957 6961 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 13:54:21.887010 6961 ovnkube.go:599] Stopped ovnkube\\\\nI1211 13:54:21.887039 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 13:54:21.887115 6961 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:54:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.845831 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.860509 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.880021 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.893551 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.904343 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.915146 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.918275 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.918340 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.918351 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.918366 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.918377 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:26Z","lastTransitionTime":"2025-12-11T13:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.925566 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.938207 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.972413 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:26 crc kubenswrapper[4924]: I1211 13:54:26.991569 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:27 crc 
kubenswrapper[4924]: I1211 13:54:27.004625 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.017621 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.020304 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.020353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.020364 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.020379 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.020391 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.029758 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.037812 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.122712 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.122739 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.122747 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.122761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.122772 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.226624 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.226672 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.226699 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.226728 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.226751 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.329093 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.329142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.329154 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.329174 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.329194 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.432026 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.432063 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.432072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.432085 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.432094 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.534581 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.534614 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.534622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.534644 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.534759 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.637373 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.637626 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.637691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.637763 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.637829 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.739558 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.739882 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.739972 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.740062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.740140 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.782928 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.783005 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.782927 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:27 crc kubenswrapper[4924]: E1211 13:54:27.783043 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:27 crc kubenswrapper[4924]: E1211 13:54:27.783133 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:27 crc kubenswrapper[4924]: E1211 13:54:27.783232 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.842876 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.843191 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.843268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.843364 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.843474 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.946714 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.946755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.946766 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.946783 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:27 crc kubenswrapper[4924]: I1211 13:54:27.946794 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:27Z","lastTransitionTime":"2025-12-11T13:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.049376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.049467 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.049478 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.049493 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.049503 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.151372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.151412 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.151422 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.151435 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.151443 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.254417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.254477 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.254493 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.254516 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.254531 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.356764 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.356809 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.356822 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.356838 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.356851 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.459022 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.459053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.459061 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.459073 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.459081 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.562073 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.562108 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.562119 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.562134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.562146 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.665130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.665200 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.665218 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.665243 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.665263 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.767268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.767304 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.767313 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.767338 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.767347 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.782756 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:28 crc kubenswrapper[4924]: E1211 13:54:28.782939 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.870152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.870201 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.870218 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.870243 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.870260 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.972721 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.972784 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.972801 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.972823 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:28 crc kubenswrapper[4924]: I1211 13:54:28.972841 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:28Z","lastTransitionTime":"2025-12-11T13:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.075546 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.075590 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.075606 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.075628 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.075645 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.179584 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.179640 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.179658 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.179682 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.179699 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.282806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.282873 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.282896 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.282925 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.282946 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.386220 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.386252 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.386261 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.386274 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.386286 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.488696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.488729 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.488737 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.488749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.488759 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.555662 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.555889 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:55:33.555870059 +0000 UTC m=+147.065351036 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.591261 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.591313 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.591348 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.591372 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.591384 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.657359 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.657752 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.657547 4924 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.657792 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.657839 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.657852 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.65783246 +0000 UTC m=+147.167313437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658018 4924 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658027 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658042 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.658035305 +0000 UTC m=+147.167516282 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658055 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658074 4924 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658143 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.658120847 +0000 UTC m=+147.167601864 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658158 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658169 4924 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658179 4924 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.658200 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.658193789 +0000 UTC m=+147.167674766 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.694562 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.694595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.694604 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.694619 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.694627 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.782000 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.782052 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.782211 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.782234 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.782317 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:29 crc kubenswrapper[4924]: E1211 13:54:29.782395 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.797871 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.797905 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.797915 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.797930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.797941 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.900131 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.900181 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.900199 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.900231 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:29 crc kubenswrapper[4924]: I1211 13:54:29.900253 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:29Z","lastTransitionTime":"2025-12-11T13:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.002427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.002485 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.002501 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.002527 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.002551 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.105526 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.105572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.105584 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.105601 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.105613 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.208578 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.208641 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.208659 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.208686 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.208707 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.311694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.311745 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.311755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.311778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.311789 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.415157 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.415219 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.415236 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.415260 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.415277 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.518204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.518602 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.518621 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.518673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.518692 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.625222 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.625268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.625282 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.625301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.625315 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.728857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.728933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.728951 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.728976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.728998 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.783558 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:30 crc kubenswrapper[4924]: E1211 13:54:30.783958 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.806432 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.831536 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.831577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.831585 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.831599 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.831610 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.934062 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.934116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.934132 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.934155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:30 crc kubenswrapper[4924]: I1211 13:54:30.934174 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:30Z","lastTransitionTime":"2025-12-11T13:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.037240 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.037303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.037352 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.037382 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.037399 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.140799 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.140862 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.140879 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.140903 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.140919 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.243862 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.243918 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.243935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.243957 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.243975 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.346901 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.346941 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.346952 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.346968 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.346977 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.450464 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.450531 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.450549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.450574 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.450592 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.553948 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.554015 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.554039 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.554070 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.554091 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.582975 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.583007 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.583015 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.583028 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.583036 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.602316 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.606988 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.607021 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.607032 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.607048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.607059 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.622589 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.627225 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.627298 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.627316 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.627377 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.627396 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.642285 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.646861 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.646911 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.646923 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.646941 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.646954 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.666119 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.670307 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.670408 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.670427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.670451 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.670470 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.691250 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:31Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.691515 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.693585 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.693636 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.693648 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.693666 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.693680 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.782892 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.782938 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.782999 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.783080 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.783188 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:31 crc kubenswrapper[4924]: E1211 13:54:31.783267 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.795961 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.796011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.796025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.796044 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.796060 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.898403 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.898442 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.898450 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.898465 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:31 crc kubenswrapper[4924]: I1211 13:54:31.898517 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:31Z","lastTransitionTime":"2025-12-11T13:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.001508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.001557 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.001575 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.001599 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.001616 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.103782 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.103817 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.103825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.103837 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.103849 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.206812 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.206860 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.206871 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.206888 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.206900 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.309258 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.309294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.309303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.309317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.309345 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.412508 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.412586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.412599 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.412617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.412630 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.515272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.515309 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.515318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.515350 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.515359 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.618104 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.618157 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.618170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.618186 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.618197 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.721090 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.721137 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.721148 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.721168 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.721194 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.782689 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:32 crc kubenswrapper[4924]: E1211 13:54:32.782851 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.825040 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.825098 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.825116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.825140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.825157 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.928293 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.928405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.928438 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.928468 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:32 crc kubenswrapper[4924]: I1211 13:54:32.928492 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:32Z","lastTransitionTime":"2025-12-11T13:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.031143 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.031588 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.031618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.031639 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.031653 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.134404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.134465 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.134476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.134503 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.134517 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.237799 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.237865 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.237877 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.237899 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.237914 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.340634 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.340695 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.340710 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.340733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.340759 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.444544 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.444625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.444634 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.444711 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.444724 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.547002 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.547061 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.547072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.547100 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.547113 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.649669 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.649717 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.649729 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.649753 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.649778 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.752776 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.752822 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.752831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.752845 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.752859 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.782129 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.782184 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:33 crc kubenswrapper[4924]: E1211 13:54:33.782257 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.782293 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:33 crc kubenswrapper[4924]: E1211 13:54:33.782409 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:33 crc kubenswrapper[4924]: E1211 13:54:33.782467 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.854686 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.854720 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.854731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.854749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.854760 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.957847 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.957882 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.957891 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.957904 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:33 crc kubenswrapper[4924]: I1211 13:54:33.957916 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:33Z","lastTransitionTime":"2025-12-11T13:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.060870 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.060936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.060948 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.060964 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.060976 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.163853 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.163919 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.163930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.163945 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.163958 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.266906 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.266954 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.266966 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.266981 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.266991 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.369518 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.369560 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.369572 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.369589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.369599 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.471924 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.471979 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.471997 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.472020 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.472039 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.574642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.574709 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.574731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.574761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.574786 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.678232 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.678295 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.678307 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.678343 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.678355 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781687 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781714 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.781995 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:34 crc kubenswrapper[4924]: E1211 13:54:34.782105 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.884153 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.884217 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.884239 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.884268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.884292 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.987252 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.987315 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.987361 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.987386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:34 crc kubenswrapper[4924]: I1211 13:54:34.987403 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:34Z","lastTransitionTime":"2025-12-11T13:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.090369 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.090411 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.090426 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.090441 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.090451 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.192502 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.192554 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.192566 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.192583 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.192596 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.294933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.294970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.294986 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.295001 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.295010 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.400036 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.400120 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.400142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.400171 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.400202 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.503620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.503669 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.503680 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.503697 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.503709 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.606364 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.606416 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.606427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.606444 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.606456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.709436 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.709778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.710016 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.710263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.710522 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.782571 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.782573 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:35 crc kubenswrapper[4924]: E1211 13:54:35.783021 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.782624 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:35 crc kubenswrapper[4924]: E1211 13:54:35.783300 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:35 crc kubenswrapper[4924]: E1211 13:54:35.782909 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.814136 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.814511 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.814746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.814961 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.815148 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.918399 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.918743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.918952 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.919146 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:35 crc kubenswrapper[4924]: I1211 13:54:35.919411 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:35Z","lastTransitionTime":"2025-12-11T13:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.021998 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.022438 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.022689 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.022909 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.023072 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.127157 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.127204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.127215 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.127233 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.127244 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.230164 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.230599 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.230731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.230884 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.231035 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.333647 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.333701 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.333712 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.333731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.333742 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.436220 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.436283 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.436299 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.436322 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.436373 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.539542 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.539598 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.539611 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.539629 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.539642 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.642211 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.642275 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.642290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.642308 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.642318 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.744539 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.744594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.744605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.744622 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.744632 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.782069 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:36 crc kubenswrapper[4924]: E1211 13:54:36.782460 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.795041 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.814315 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1fd49f92-5a54-4dbc-9089-7dcaa313fb42\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b20e783fafccdd7012230bee9cd575f303a9c5488c3253f8e470876ebf90e7ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38c1736644bacfcc11ac5b561980eb74201f152c42a28c8d9557009e5f91e847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f16cce4dab4c6291e6beabb631907b2f1157aa7c7ca8a185d9d3084a5cef254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/55f3969dfa2b059bd160be81a582586cd451366935585db5d78ce3ba11fa0e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2ec27fdd325767098bbaa94839e2c2fe617b53a9961ae2063af4c71f0b9e78b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cba992df5b9219ec98e2e42fa684c0cf2bbf4547d386ad145431956b68aee70\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cba992df5b9219ec98e2e42fa684c0cf2bbf4547d386ad145431956b68aee70\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b261b45eec39778ff68af5eec9249e0e3558ac5d37233ffc1f6448ff0ad614d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b261b45eec39778ff68af5eec9249e0e3558ac5d37233ffc1f6448ff0ad614d9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c0b680c26a54870352206907c7e627fdebe22d7e9d01406d28ae63bc7aef98be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0b680c26a54870352206907c7e627fdebe22d7e9d01406d28ae63bc7aef98be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.827153 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56f472021a9fd8c5cec4315eb98f28f49fc9d66dd9fcbee7d879f27b2a2a5845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.841687 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x9vcv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5cac4fc-9d62-4680-9f70-650c4c118a9e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea46ad408b8ee5369bb2f1e335734384e030d58e74f28f7bf17d94cd32572661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w5m66\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x9vcv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.847400 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.847446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.847457 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.847476 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.847490 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.863973 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-j8qls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829d010-f239-43e9-9775-6dc41c5e83c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae27c25987adbfdef8e2b94c1839946206a150c94024e1ad4bbf11848c6fb36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af49efbcb88df9c80f437b4ec9eface049cdb060587ae16785abe9bb4e59be8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://414d7c775906eb6e441a3a184cd64cce47a790d6002c3953e3e0a4d04d8af0ef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6af50909b1b8367b10fc653f0887429b6f2eed51fd7284d7239abd844c779d1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4100227d2004add787ed84589447f2c178ed190fac6ee011a7adb9dbc2d3310\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66f4a057fe2364aae357dcb161b28471693bdefb6d7b686674d2c747ad1119fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0eddc8246969d161eadf90024f7ca8b43db33d20243beb5d00ed9d38367f5f8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5j7jw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-j8qls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.873666 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wjmj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544b1b24-246d-42dc-83f2-b5cbd3b2e927\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6dc444656b62544df90ad6958ea41b787dbc9998777c308657e9ed636595a51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qrhqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wjmj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.886572 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.897703 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafc4b5e-18de-4683-b008-775c510f12bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://670be56eb35150b11df14a007cde8c302a7257ba5933d03ddb5e309e27adea72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf603e5a347
993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8n98m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rfwqf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.913525 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:22Z\\\",\\\"message\\\":\\\"penshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1211 13:54:21.885388 6961 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.885431 6961 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1211 13:54:21.886132 6961 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 13:54:21.886193 6961 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 13:54:21.886218 6961 factory.go:656] Stopping watch factory\\\\nI1211 13:54:21.886240 6961 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 13:54:21.886265 6961 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 13:54:21.886940 6961 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1211 13:54:21.886957 6961 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1211 13:54:21.887010 6961 ovnkube.go:599] Stopped ovnkube\\\\nI1211 13:54:21.887039 6961 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 13:54:21.887115 6961 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:54:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f678b03dec7d671ac
8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k8d9c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8jnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.927287 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5vrtp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T13:54:12Z\\\",\\\"message\\\":\\\"2025-12-11T13:53:27+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5\\\\n2025-12-11T13:53:27+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d6f7cc06-3ddb-41b7-b1f0-15269fb08fd5 to /host/opt/cni/bin/\\\\n2025-12-11T13:53:27Z [verbose] multus-daemon started\\\\n2025-12-11T13:53:27Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T13:54:12Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wr4jt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:26Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5vrtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.940824 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ac2d7ff-9d46-4fe3-a299-9238182e04fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-s
yncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 13:53:20.409862 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:53:20.411202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-410172651/tls.crt::/tmp/serving-cert-410172651/tls.key\\\\\\\"\\\\nI1211 13:53:25.863525 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:53:25.866712 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:53:25.866731 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:53:25.866751 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI1211 13:53:25.866757 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:53:25.875065 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:53:25.875095 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875100 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:53:25.875105 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:53:25.875109 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:53:25.875114 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:53:25.875117 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:53:25.875596 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:53:25.876931 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.950041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.950077 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.950087 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.950121 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.950131 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:36Z","lastTransitionTime":"2025-12-11T13:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.953706 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://40c694ea39c7bfa4155e34c571a8dd281a8b5e0f014d4e639f89c906c4933bf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.969461 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:36 crc kubenswrapper[4924]: I1211 13:54:36.988817 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:36Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.008686 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2962fbe5-b421-4ad9-a868-6f8db1af969a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219dec391cbbb91d592946456dc0fbcd1c5f8fe1efae158afa616ff0a0d2dcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48c9e3179e357e9f007a9360d7bce11434f102e4588548919eb729ad9e41bb78\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cfb9618ad2041f61bc027db165392baf81af2f96de8bf74a0f17438fb22d7e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.020040 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abfba846-c3f8-4800-bd0d-28d88ca06293\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24b8358e1deeedd413c7f08ee28d7e2e91bea20d3aed9e8bf10b99f97ada52ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f74b763d1ca468a7d805e6ca51a2547d49bb852f2e43357a4f3272ede5d362\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e88440ad93860b36af3e47a61c875a1625771a12e382d15a6a02829554cec92f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://0fafde909a676db1a9d79d1126d0bf55507e17a7b605f1590f4020a7c40b479a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:53:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:53:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:07Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.031665 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af5b9394d6e200e9009fc8f2efc2d8438cfc720484dbf07833892430d2c42e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.044390 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b1ac75b-7e02-4289-a207-c105e63a2fdc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b59795c8878b4f9d163f37c214bcc2636051be98b79768b6efed20330473b21b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce86aadf71d5587b7108067af4aae5297ff51
4d02edbf818beb4b2a28b2c8452\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7s2cg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-7v2pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.052044 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.052066 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.052074 4924 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.052086 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.052094 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.057550 4924 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-79mv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39f08493-e794-4e97-bc69-8faa67a120b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:53:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x9ws7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:53:39Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-79mv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:37Z is after 2025-08-24T17:21:41Z" Dec 
11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.154204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.154252 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.154274 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.154294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.154306 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.257184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.257260 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.257270 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.257283 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.257291 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.362183 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.362238 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.362250 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.362263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.362272 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.465577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.465627 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.465646 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.465670 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.465688 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.568778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.568839 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.568858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.568882 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.568898 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.671591 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.671661 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.671678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.671705 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.671722 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.773683 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.773766 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.773786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.773804 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.773812 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.782367 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.782417 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.782682 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:37 crc kubenswrapper[4924]: E1211 13:54:37.783008 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:37 crc kubenswrapper[4924]: E1211 13:54:37.783145 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.783245 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 13:54:37 crc kubenswrapper[4924]: E1211 13:54:37.783321 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:37 crc kubenswrapper[4924]: E1211 13:54:37.783451 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.876933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.877005 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.877018 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.877036 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.877048 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.980791 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.980841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.980853 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.980872 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:37 crc kubenswrapper[4924]: I1211 13:54:37.980884 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:37Z","lastTransitionTime":"2025-12-11T13:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.083110 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.083155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.083172 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.083193 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.083210 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.186464 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.186892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.187139 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.187378 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.187569 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.290512 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.290586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.290605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.290631 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.290651 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.393504 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.393589 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.393611 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.393641 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.393663 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.496723 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.497069 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.497138 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.497212 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.497277 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.600707 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.601455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.601507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.601543 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.601569 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.704943 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.704989 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.705001 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.705019 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.705032 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.782903 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:38 crc kubenswrapper[4924]: E1211 13:54:38.783191 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.807929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.807988 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.808008 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.808033 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.808053 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.910182 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.910229 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.910241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.910257 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:38 crc kubenswrapper[4924]: I1211 13:54:38.910269 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:38Z","lastTransitionTime":"2025-12-11T13:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.012361 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.012419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.012437 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.012460 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.012481 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.115578 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.115637 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.115655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.115679 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.115700 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.218621 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.218698 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.218721 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.218749 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.218770 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.321270 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.321348 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.321365 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.321386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.321400 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.423838 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.423889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.423905 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.423930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.423949 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.526872 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.526940 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.526956 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.526980 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.526997 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.629571 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.629609 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.629618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.629630 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.629640 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.732620 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.732703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.732729 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.732762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.732785 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.782699 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.782734 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.783134 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:39 crc kubenswrapper[4924]: E1211 13:54:39.783374 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:39 crc kubenswrapper[4924]: E1211 13:54:39.783550 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:39 crc kubenswrapper[4924]: E1211 13:54:39.783715 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.836592 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.836655 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.836672 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.836696 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.836724 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.938944 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.938981 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.938991 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.939004 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:39 crc kubenswrapper[4924]: I1211 13:54:39.939013 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:39Z","lastTransitionTime":"2025-12-11T13:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.041171 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.041210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.041219 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.041232 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.041244 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.143613 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.143663 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.143682 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.143698 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.143710 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.247425 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.247488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.247513 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.247542 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.247564 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.351353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.351431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.351458 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.351529 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.351557 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.454737 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.454807 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.454825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.454851 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.454871 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.558525 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.558625 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.558651 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.558713 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.558737 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.661653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.661774 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.661788 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.661811 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.661824 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.765410 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.765455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.765469 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.765486 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.765497 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.782262 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:40 crc kubenswrapper[4924]: E1211 13:54:40.782559 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.868549 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.868618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.868635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.868660 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.868679 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.971740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.971786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.971800 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.971817 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:40 crc kubenswrapper[4924]: I1211 13:54:40.971874 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:40Z","lastTransitionTime":"2025-12-11T13:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.075275 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.075319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.075369 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.075389 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.075402 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.178294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.178353 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.178366 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.178381 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.178393 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.280577 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.280617 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.280626 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.280642 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.280651 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.383011 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.383072 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.383081 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.383094 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.383105 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.486235 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.486311 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.486405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.486430 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.486444 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.589889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.589950 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.589974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.590068 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.590091 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.694041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.694079 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.694091 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.694106 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.694117 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.782014 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.782024 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:41 crc kubenswrapper[4924]: E1211 13:54:41.782192 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:41 crc kubenswrapper[4924]: E1211 13:54:41.782282 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.782051 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:41 crc kubenswrapper[4924]: E1211 13:54:41.782708 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.796597 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.796730 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.796745 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.796933 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.796968 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.900598 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.900699 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.900719 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.900742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:41 crc kubenswrapper[4924]: I1211 13:54:41.900797 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:41Z","lastTransitionTime":"2025-12-11T13:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.002970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.003015 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.003025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.003041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.003080 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.089166 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.089491 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.089594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.089706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.089841 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.102458 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.107242 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.107281 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.107290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.107337 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.107349 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.120209 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.123756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.123818 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.123836 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.123858 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.123872 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.135210 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.138289 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.138341 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.138351 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.138365 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.138374 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.149026 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.151836 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.151976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.152041 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.152121 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.152192 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.162540 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T13:54:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"13f79ec0-167e-4d1b-a988-47bfc5368a31\\\",\\\"systemUUID\\\":\\\"c872b68c-6ac6-4941-bce1-6e21ecaf912d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.162650 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.163758 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.163853 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.163930 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.163995 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.164058 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.266822 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.266889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.266912 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.266941 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.266965 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.369810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.370149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.370419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.370635 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.370843 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.474309 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.474779 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.474974 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.475321 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.475548 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.579303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.579374 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.579388 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.579409 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.579423 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.682018 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.682116 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.682140 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.682170 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.682192 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.782534 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:42 crc kubenswrapper[4924]: E1211 13:54:42.782780 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.784836 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.784885 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.784905 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.784929 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.784944 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.891738 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.891790 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.891806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.891825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.891844 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.994703 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.994737 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.994748 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.994762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:42 crc kubenswrapper[4924]: I1211 13:54:42.994773 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:42Z","lastTransitionTime":"2025-12-11T13:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.097761 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.097804 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.097814 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.097828 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.097850 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.200149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.200542 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.200701 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.200837 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.200988 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.303936 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.304286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.304544 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.304741 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.304902 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.407376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.407420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.407435 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.407454 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.407468 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.509993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.510103 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.510115 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.510134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.510145 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.612309 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.612417 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.612436 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.612460 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.612478 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.711019 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:43 crc kubenswrapper[4924]: E1211 13:54:43.711185 4924 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:54:43 crc kubenswrapper[4924]: E1211 13:54:43.711294 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs podName:39f08493-e794-4e97-bc69-8faa67a120b8 nodeName:}" failed. No retries permitted until 2025-12-11 13:55:47.711271864 +0000 UTC m=+161.220752901 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs") pod "network-metrics-daemon-79mv2" (UID: "39f08493-e794-4e97-bc69-8faa67a120b8") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.714907 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.714942 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.714951 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.714967 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.714977 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.782929 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.782933 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.783044 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:43 crc kubenswrapper[4924]: E1211 13:54:43.783229 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:43 crc kubenswrapper[4924]: E1211 13:54:43.783359 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:43 crc kubenswrapper[4924]: E1211 13:54:43.783652 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.817529 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.817581 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.817593 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.817608 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.817620 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.919594 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.919645 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.919659 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.919675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:43 crc kubenswrapper[4924]: I1211 13:54:43.919688 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:43Z","lastTransitionTime":"2025-12-11T13:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.021853 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.021887 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.021897 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.021913 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.021923 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.124432 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.124463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.124471 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.124483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.124491 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.226674 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.226725 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.226736 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.226752 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.226763 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.329928 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.330014 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.330046 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.330075 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.330096 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.433618 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.433699 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.433725 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.433760 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.433783 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.536691 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.536756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.536777 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.536806 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.536829 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.639973 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.640282 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.640536 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.640717 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.640904 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.743081 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.743416 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.743543 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.743673 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.743789 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.782662 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:44 crc kubenswrapper[4924]: E1211 13:54:44.783078 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.846250 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.846291 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.846301 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.846317 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.846352 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.949746 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.949814 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.949832 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.949857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:44 crc kubenswrapper[4924]: I1211 13:54:44.949876 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:44Z","lastTransitionTime":"2025-12-11T13:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.052718 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.052756 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.052765 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.052778 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.052787 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.154885 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.155101 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.155203 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.155297 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.155500 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.258046 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.258144 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.258160 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.258188 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.258204 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.360543 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.360892 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.361092 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.361298 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.361586 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.464210 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.464297 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.464366 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.464402 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.464427 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.566857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.566932 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.566949 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.566976 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.566994 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.668970 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.669048 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.669067 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.669093 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.669111 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.771848 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.771879 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.771887 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.771900 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.771908 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.782115 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.782166 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.782192 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:45 crc kubenswrapper[4924]: E1211 13:54:45.782267 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:45 crc kubenswrapper[4924]: E1211 13:54:45.782399 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:45 crc kubenswrapper[4924]: E1211 13:54:45.782507 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.874825 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.874900 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.874927 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.874952 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.874969 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.978193 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.978246 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.978263 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.978286 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:45 crc kubenswrapper[4924]: I1211 13:54:45.978304 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:45Z","lastTransitionTime":"2025-12-11T13:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.081762 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.081809 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.081820 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.081838 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.081850 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.185102 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.185234 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.185261 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.185292 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.185314 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.289564 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.289632 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.289653 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.289677 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.289694 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.392363 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.392427 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.392472 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.392499 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.392517 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.495706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.495755 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.495770 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.495786 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.495799 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.599379 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.599448 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.599472 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.599498 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.599520 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.702149 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.702217 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.702439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.702455 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.702464 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.782416 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:46 crc kubenswrapper[4924]: E1211 13:54:46.782645 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.806040 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.806100 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.806118 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.806139 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.806154 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.833598 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=80.833578712 podStartE2EDuration="1m20.833578712s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.816184389 +0000 UTC m=+100.325665366" watchObservedRunningTime="2025-12-11 13:54:46.833578712 +0000 UTC m=+100.343059689" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.890274 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.890256599 podStartE2EDuration="1m16.890256599s" podCreationTimestamp="2025-12-11 13:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.889833448 +0000 UTC m=+100.399314435" watchObservedRunningTime="2025-12-11 13:54:46.890256599 +0000 UTC m=+100.399737566" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.890500 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=10.890413523 podStartE2EDuration="10.890413523s" podCreationTimestamp="2025-12-11 13:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.875983669 +0000 UTC m=+100.385464646" watchObservedRunningTime="2025-12-11 13:54:46.890413523 +0000 UTC m=+100.399894500" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.902536 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podStartSLOduration=49.902512915 podStartE2EDuration="49.902512915s" podCreationTimestamp="2025-12-11 13:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.901644012 +0000 UTC m=+100.411124999" watchObservedRunningTime="2025-12-11 13:54:46.902512915 +0000 UTC m=+100.411993902" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.908200 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.908290 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.908303 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.908349 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.908394 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:46Z","lastTransitionTime":"2025-12-11T13:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.926245 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-7v2pp" podStartSLOduration=80.926211135 podStartE2EDuration="1m20.926211135s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.925229999 +0000 UTC m=+100.434710986" watchObservedRunningTime="2025-12-11 13:54:46.926211135 +0000 UTC m=+100.435692112" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.979107 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.979086651 podStartE2EDuration="16.979086651s" podCreationTimestamp="2025-12-11 13:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.963064665 +0000 UTC m=+100.472545662" watchObservedRunningTime="2025-12-11 13:54:46.979086651 +0000 UTC m=+100.488567628" Dec 11 13:54:46 crc kubenswrapper[4924]: I1211 13:54:46.995297 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-x9vcv" podStartSLOduration=81.995277451 podStartE2EDuration="1m21.995277451s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:46.994944952 +0000 UTC m=+100.504425929" watchObservedRunningTime="2025-12-11 13:54:46.995277451 +0000 UTC m=+100.504758438" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012444 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012493 4924 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012510 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012534 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012553 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.012966 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-j8qls" podStartSLOduration=82.012945811 podStartE2EDuration="1m22.012945811s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:47.012931241 +0000 UTC m=+100.522412218" watchObservedRunningTime="2025-12-11 13:54:47.012945811 +0000 UTC m=+100.522426788" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.023464 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wjmj7" podStartSLOduration=82.02344826 podStartE2EDuration="1m22.02344826s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:47.022834984 +0000 UTC 
m=+100.532316001" watchObservedRunningTime="2025-12-11 13:54:47.02344826 +0000 UTC m=+100.532929247" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.078035 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podStartSLOduration=82.078019531 podStartE2EDuration="1m22.078019531s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:47.050790527 +0000 UTC m=+100.560271514" watchObservedRunningTime="2025-12-11 13:54:47.078019531 +0000 UTC m=+100.587500508" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.093714 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5vrtp" podStartSLOduration=82.093697048 podStartE2EDuration="1m22.093697048s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:47.092294191 +0000 UTC m=+100.601775168" watchObservedRunningTime="2025-12-11 13:54:47.093697048 +0000 UTC m=+100.603178025" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.114889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.114917 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.114924 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.114945 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 
13:54:47.114954 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.217504 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.217538 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.217548 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.217563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.217574 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.321288 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.321381 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.321405 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.321434 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.321456 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.424117 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.424184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.424202 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.424226 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.424255 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.526993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.527043 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.527057 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.527075 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.527087 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.629142 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.629227 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.629251 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.629288 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.629312 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.731371 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.731445 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.731463 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.731488 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.731506 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.782463 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.782529 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.782483 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:47 crc kubenswrapper[4924]: E1211 13:54:47.782635 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:47 crc kubenswrapper[4924]: E1211 13:54:47.782735 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:47 crc kubenswrapper[4924]: E1211 13:54:47.782860 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.833743 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.833831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.833857 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.833889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.833913 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.937586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.937685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.937705 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.937735 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:47 crc kubenswrapper[4924]: I1211 13:54:47.937755 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:47Z","lastTransitionTime":"2025-12-11T13:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.041082 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.041155 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.041180 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.041208 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.041229 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.144446 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.144506 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.144523 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.144546 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.144565 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.247386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.247526 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.247540 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.247556 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.247567 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.350704 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.350759 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.350780 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.350808 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.350829 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.453598 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.453657 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.453675 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.453701 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.453716 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.556273 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.556318 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.556358 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.556733 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.556755 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.659816 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.659873 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.659889 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.659914 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.659930 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.763731 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.763817 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.763839 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.763868 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.763891 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.782988 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:48 crc kubenswrapper[4924]: E1211 13:54:48.783298 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.785046 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 13:54:48 crc kubenswrapper[4924]: E1211 13:54:48.785367 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.867035 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.867113 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.867134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.867165 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.867189 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.969408 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.969465 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.969481 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.969505 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:48 crc kubenswrapper[4924]: I1211 13:54:48.969526 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:48Z","lastTransitionTime":"2025-12-11T13:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.072615 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.072680 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.072697 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.072800 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.072817 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.175604 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.175670 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.175684 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.175700 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.175712 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.278626 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.278678 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.278695 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.278724 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.278745 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.381294 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.381406 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.381419 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.381459 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.381475 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.484479 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.484527 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.484537 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.484560 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.484571 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.586668 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.586694 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.586701 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.586713 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.586721 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.688911 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.688954 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.688965 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.688982 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.688995 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.782194 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.782250 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.782526 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:49 crc kubenswrapper[4924]: E1211 13:54:49.782772 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:49 crc kubenswrapper[4924]: E1211 13:54:49.782904 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:49 crc kubenswrapper[4924]: E1211 13:54:49.783150 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.791740 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.791795 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.791816 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.791841 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.791862 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.895272 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.895321 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.895407 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.895440 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.895460 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.998487 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.998551 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.998576 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.998605 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:49 crc kubenswrapper[4924]: I1211 13:54:49.998626 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:49Z","lastTransitionTime":"2025-12-11T13:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.101595 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.101657 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.101681 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.101745 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.101767 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.204268 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.204319 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.204357 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.204382 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.204399 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.306518 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.306555 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.306563 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.306576 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.306585 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.409846 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.409920 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.409954 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.409993 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.410022 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.513025 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.513097 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.513125 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.513154 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.513175 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.615925 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.615992 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.616016 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.616043 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.616064 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.718870 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.718935 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.718952 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.718978 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.719000 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.782560 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:50 crc kubenswrapper[4924]: E1211 13:54:50.782740 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.822241 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.822404 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.822431 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.822461 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.822484 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.925593 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.925646 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.925663 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.925685 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:50 crc kubenswrapper[4924]: I1211 13:54:50.925702 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:50Z","lastTransitionTime":"2025-12-11T13:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.028345 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.028376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.028386 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.028399 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.028407 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.132152 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.132196 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.132220 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.132239 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.132254 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.234764 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.234870 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.234888 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.234915 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.234932 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.338483 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.338660 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.338681 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.338706 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.338725 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.441437 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.441507 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.441530 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.441562 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.441586 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.544053 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.544111 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.544130 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.544153 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.544172 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.647376 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.647439 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.647459 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.647492 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.647510 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.750509 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.750568 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.750586 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.750612 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.750630 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.782950 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:51 crc kubenswrapper[4924]: E1211 13:54:51.783127 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.783402 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:54:51 crc kubenswrapper[4924]: E1211 13:54:51.783533 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.783617 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:54:51 crc kubenswrapper[4924]: E1211 13:54:51.783785 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.854381 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.854430 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.854440 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.854456 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.854468 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.957156 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.957224 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.957246 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.957273 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:51 crc kubenswrapper[4924]: I1211 13:54:51.957290 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:51Z","lastTransitionTime":"2025-12-11T13:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.059742 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.059793 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.059810 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.059831 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.059847 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:52Z","lastTransitionTime":"2025-12-11T13:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.162375 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.162412 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.162420 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.162434 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.162443 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:52Z","lastTransitionTime":"2025-12-11T13:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.266118 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.266184 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.266204 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.266230 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.266251 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:52Z","lastTransitionTime":"2025-12-11T13:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.370862 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.370958 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.370977 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.371001 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.371018 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:52Z","lastTransitionTime":"2025-12-11T13:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.444061 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.444117 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.444134 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.444162 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.444180 4924 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:54:52Z","lastTransitionTime":"2025-12-11T13:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.513239 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw"] Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.513886 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.516820 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.517149 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.517591 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.517619 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.605560 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.605601 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ec9ecc-abc3-4150-9b65-192406decbc0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.605653 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/45ec9ecc-abc3-4150-9b65-192406decbc0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.605852 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.605961 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ec9ecc-abc3-4150-9b65-192406decbc0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706469 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706522 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ec9ecc-abc3-4150-9b65-192406decbc0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706543 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706561 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ec9ecc-abc3-4150-9b65-192406decbc0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706611 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45ec9ecc-abc3-4150-9b65-192406decbc0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706621 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.706665 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/45ec9ecc-abc3-4150-9b65-192406decbc0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.707569 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45ec9ecc-abc3-4150-9b65-192406decbc0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.715925 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45ec9ecc-abc3-4150-9b65-192406decbc0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.728090 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45ec9ecc-abc3-4150-9b65-192406decbc0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-666xw\" (UID: \"45ec9ecc-abc3-4150-9b65-192406decbc0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.782038 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:54:52 crc kubenswrapper[4924]: E1211 13:54:52.782268 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:54:52 crc kubenswrapper[4924]: I1211 13:54:52.837322 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" Dec 11 13:54:53 crc kubenswrapper[4924]: I1211 13:54:53.642377 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" event={"ID":"45ec9ecc-abc3-4150-9b65-192406decbc0","Type":"ContainerStarted","Data":"ac72d8c45879a0f7b6d52e6a201772b87f2580032eece6a1d31e08fcde3a602a"} Dec 11 13:54:53 crc kubenswrapper[4924]: I1211 13:54:53.642473 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" event={"ID":"45ec9ecc-abc3-4150-9b65-192406decbc0","Type":"ContainerStarted","Data":"bbd04ff5de0a60d5e094595eb1bdd69d6bf96f83558897ebce85285566a3429b"} Dec 11 13:54:53 crc kubenswrapper[4924]: I1211 13:54:53.783028 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:54:53 crc kubenswrapper[4924]: I1211 13:54:53.783050 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:53 crc kubenswrapper[4924]: I1211 13:54:53.783050 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:53 crc kubenswrapper[4924]: E1211 13:54:53.783250 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:54:53 crc kubenswrapper[4924]: E1211 13:54:53.783468 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:53 crc kubenswrapper[4924]: E1211 13:54:53.783591 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:54 crc kubenswrapper[4924]: I1211 13:54:54.782278 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:54:54 crc kubenswrapper[4924]: E1211 13:54:54.782523 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:54:55 crc kubenswrapper[4924]: I1211 13:54:55.782440 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:54:55 crc kubenswrapper[4924]: I1211 13:54:55.782470 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:55 crc kubenswrapper[4924]: I1211 13:54:55.782531 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:55 crc kubenswrapper[4924]: E1211 13:54:55.783127 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:55 crc kubenswrapper[4924]: E1211 13:54:55.783163 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:55 crc kubenswrapper[4924]: E1211 13:54:55.783595 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:54:56 crc kubenswrapper[4924]: I1211 13:54:56.782739 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:54:56 crc kubenswrapper[4924]: E1211 13:54:56.783834 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:54:57 crc kubenswrapper[4924]: I1211 13:54:57.782303 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:57 crc kubenswrapper[4924]: I1211 13:54:57.782467 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:54:57 crc kubenswrapper[4924]: E1211 13:54:57.782526 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:57 crc kubenswrapper[4924]: I1211 13:54:57.782319 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:57 crc kubenswrapper[4924]: E1211 13:54:57.782675 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:54:57 crc kubenswrapper[4924]: E1211 13:54:57.782801 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.657654 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/1.log"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.659116 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/0.log"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.659191 4924 generic.go:334] "Generic (PLEG): container finished" podID="5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c" containerID="59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c" exitCode=1
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.659242 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerDied","Data":"59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c"}
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.659314 4924 scope.go:117] "RemoveContainer" containerID="ec6ee181137ef5723b6e7d7f0d406598f13447f7a517904c99c69c551be86f8a"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.660060 4924 scope.go:117] "RemoveContainer" containerID="59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c"
Dec 11 13:54:58 crc kubenswrapper[4924]: E1211 13:54:58.660390 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5vrtp_openshift-multus(5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c)\"" pod="openshift-multus/multus-5vrtp" podUID="5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.694297 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-666xw" podStartSLOduration=93.694282294 podStartE2EDuration="1m33.694282294s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:54:53.65708789 +0000 UTC m=+107.166568897" watchObservedRunningTime="2025-12-11 13:54:58.694282294 +0000 UTC m=+112.203763271"
Dec 11 13:54:58 crc kubenswrapper[4924]: I1211 13:54:58.782927 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:54:58 crc kubenswrapper[4924]: E1211 13:54:58.783053 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:54:59 crc kubenswrapper[4924]: I1211 13:54:59.665493 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/1.log"
Dec 11 13:54:59 crc kubenswrapper[4924]: I1211 13:54:59.782048 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:54:59 crc kubenswrapper[4924]: I1211 13:54:59.782202 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:54:59 crc kubenswrapper[4924]: I1211 13:54:59.782048 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:54:59 crc kubenswrapper[4924]: E1211 13:54:59.782422 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:54:59 crc kubenswrapper[4924]: E1211 13:54:59.782634 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:54:59 crc kubenswrapper[4924]: E1211 13:54:59.782785 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:00 crc kubenswrapper[4924]: I1211 13:55:00.783146 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:00 crc kubenswrapper[4924]: E1211 13:55:00.783377 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:01 crc kubenswrapper[4924]: I1211 13:55:01.782716 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:01 crc kubenswrapper[4924]: I1211 13:55:01.782744 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:01 crc kubenswrapper[4924]: I1211 13:55:01.782770 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:01 crc kubenswrapper[4924]: E1211 13:55:01.783216 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:01 crc kubenswrapper[4924]: E1211 13:55:01.783237 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:01 crc kubenswrapper[4924]: E1211 13:55:01.783292 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:01 crc kubenswrapper[4924]: I1211 13:55:01.783576 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"
Dec 11 13:55:01 crc kubenswrapper[4924]: E1211 13:55:01.783871 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8jnlw_openshift-ovn-kubernetes(47432eab-9072-43ce-9bf7-0dbd6fa271e7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7"
Dec 11 13:55:02 crc kubenswrapper[4924]: I1211 13:55:02.783152 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:02 crc kubenswrapper[4924]: E1211 13:55:02.783294 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:03 crc kubenswrapper[4924]: I1211 13:55:03.782986 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:03 crc kubenswrapper[4924]: I1211 13:55:03.783067 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:03 crc kubenswrapper[4924]: E1211 13:55:03.783137 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:03 crc kubenswrapper[4924]: I1211 13:55:03.783234 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:03 crc kubenswrapper[4924]: E1211 13:55:03.783371 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:03 crc kubenswrapper[4924]: E1211 13:55:03.783586 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:04 crc kubenswrapper[4924]: I1211 13:55:04.783072 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:04 crc kubenswrapper[4924]: E1211 13:55:04.783193 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:05 crc kubenswrapper[4924]: I1211 13:55:05.782668 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:05 crc kubenswrapper[4924]: E1211 13:55:05.782793 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:05 crc kubenswrapper[4924]: I1211 13:55:05.782987 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:05 crc kubenswrapper[4924]: E1211 13:55:05.783051 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:05 crc kubenswrapper[4924]: I1211 13:55:05.783188 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:05 crc kubenswrapper[4924]: E1211 13:55:05.783249 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:06 crc kubenswrapper[4924]: E1211 13:55:06.767072 4924 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Dec 11 13:55:06 crc kubenswrapper[4924]: I1211 13:55:06.782135 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:06 crc kubenswrapper[4924]: E1211 13:55:06.784505 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:06 crc kubenswrapper[4924]: E1211 13:55:06.987786 4924 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 11 13:55:07 crc kubenswrapper[4924]: I1211 13:55:07.782604 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:07 crc kubenswrapper[4924]: I1211 13:55:07.782656 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:07 crc kubenswrapper[4924]: I1211 13:55:07.782733 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:07 crc kubenswrapper[4924]: E1211 13:55:07.782893 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:07 crc kubenswrapper[4924]: E1211 13:55:07.783005 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:07 crc kubenswrapper[4924]: E1211 13:55:07.783080 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:08 crc kubenswrapper[4924]: I1211 13:55:08.783041 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:08 crc kubenswrapper[4924]: E1211 13:55:08.783225 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:09 crc kubenswrapper[4924]: I1211 13:55:09.782654 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:09 crc kubenswrapper[4924]: E1211 13:55:09.782923 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:09 crc kubenswrapper[4924]: I1211 13:55:09.783306 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:09 crc kubenswrapper[4924]: E1211 13:55:09.783502 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:09 crc kubenswrapper[4924]: I1211 13:55:09.784272 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:09 crc kubenswrapper[4924]: E1211 13:55:09.784637 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:10 crc kubenswrapper[4924]: I1211 13:55:10.782015 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:10 crc kubenswrapper[4924]: E1211 13:55:10.782136 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:11 crc kubenswrapper[4924]: I1211 13:55:11.782248 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:11 crc kubenswrapper[4924]: E1211 13:55:11.782450 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:11 crc kubenswrapper[4924]: I1211 13:55:11.782287 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:11 crc kubenswrapper[4924]: E1211 13:55:11.782577 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:11 crc kubenswrapper[4924]: I1211 13:55:11.782269 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:11 crc kubenswrapper[4924]: E1211 13:55:11.782643 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:11 crc kubenswrapper[4924]: E1211 13:55:11.989769 4924 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 11 13:55:12 crc kubenswrapper[4924]: I1211 13:55:12.783073 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:12 crc kubenswrapper[4924]: E1211 13:55:12.783226 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:13 crc kubenswrapper[4924]: I1211 13:55:13.782841 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:13 crc kubenswrapper[4924]: I1211 13:55:13.782878 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:13 crc kubenswrapper[4924]: E1211 13:55:13.783000 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:13 crc kubenswrapper[4924]: E1211 13:55:13.783182 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:13 crc kubenswrapper[4924]: I1211 13:55:13.783457 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:13 crc kubenswrapper[4924]: I1211 13:55:13.783477 4924 scope.go:117] "RemoveContainer" containerID="59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c"
Dec 11 13:55:13 crc kubenswrapper[4924]: E1211 13:55:13.783537 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:14 crc kubenswrapper[4924]: I1211 13:55:14.712844 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/1.log"
Dec 11 13:55:14 crc kubenswrapper[4924]: I1211 13:55:14.713463 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerStarted","Data":"4da70e3dffee1c01a4ec9a873590ee5253b2b0924a75980fc26697bf92ddaa41"}
Dec 11 13:55:14 crc kubenswrapper[4924]: I1211 13:55:14.782875 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:14 crc kubenswrapper[4924]: E1211 13:55:14.783009 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:15 crc kubenswrapper[4924]: I1211 13:55:15.782262 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 13:55:15 crc kubenswrapper[4924]: I1211 13:55:15.782353 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 13:55:15 crc kubenswrapper[4924]: I1211 13:55:15.782367 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:15 crc kubenswrapper[4924]: E1211 13:55:15.782427 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 13:55:15 crc kubenswrapper[4924]: E1211 13:55:15.782561 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:15 crc kubenswrapper[4924]: E1211 13:55:15.782664 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 13:55:15 crc kubenswrapper[4924]: I1211 13:55:15.784055 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.475007 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-79mv2"]
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.720839 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/3.log"
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.723850 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerStarted","Data":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"}
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.723907 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2"
Dec 11 13:55:16 crc kubenswrapper[4924]: E1211 13:55:16.724022 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8"
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.724355 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw"
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.749044 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podStartSLOduration=111.749025642 podStartE2EDuration="1m51.749025642s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:16.747392655 +0000 UTC m=+130.256873642" watchObservedRunningTime="2025-12-11 13:55:16.749025642 +0000 UTC m=+130.258506629"
Dec 11 13:55:16 crc kubenswrapper[4924]: I1211 13:55:16.782226 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 13:55:16 crc kubenswrapper[4924]: E1211 13:55:16.784486 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 13:55:16 crc kubenswrapper[4924]: E1211 13:55:16.991347 4924 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 11 13:55:17 crc kubenswrapper[4924]: I1211 13:55:17.782235 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:17 crc kubenswrapper[4924]: E1211 13:55:17.782715 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:55:17 crc kubenswrapper[4924]: I1211 13:55:17.782273 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:17 crc kubenswrapper[4924]: E1211 13:55:17.782903 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:55:18 crc kubenswrapper[4924]: I1211 13:55:18.782504 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:18 crc kubenswrapper[4924]: I1211 13:55:18.782617 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:18 crc kubenswrapper[4924]: E1211 13:55:18.782692 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:55:18 crc kubenswrapper[4924]: E1211 13:55:18.782919 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:55:19 crc kubenswrapper[4924]: I1211 13:55:19.781933 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:19 crc kubenswrapper[4924]: E1211 13:55:19.782295 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:55:19 crc kubenswrapper[4924]: I1211 13:55:19.782020 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:19 crc kubenswrapper[4924]: E1211 13:55:19.782649 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:55:20 crc kubenswrapper[4924]: I1211 13:55:20.294092 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 13:55:20 crc kubenswrapper[4924]: I1211 13:55:20.782052 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:20 crc kubenswrapper[4924]: I1211 13:55:20.782064 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:20 crc kubenswrapper[4924]: E1211 13:55:20.782210 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:55:20 crc kubenswrapper[4924]: E1211 13:55:20.782311 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-79mv2" podUID="39f08493-e794-4e97-bc69-8faa67a120b8" Dec 11 13:55:21 crc kubenswrapper[4924]: I1211 13:55:21.782572 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:21 crc kubenswrapper[4924]: I1211 13:55:21.782679 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:21 crc kubenswrapper[4924]: E1211 13:55:21.782715 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:55:21 crc kubenswrapper[4924]: E1211 13:55:21.782923 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.782204 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.782277 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.784548 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.785220 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.785260 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 13:55:22 crc kubenswrapper[4924]: I1211 13:55:22.786221 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.321382 4924 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.369775 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nf4pv"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.370480 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.380000 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.384884 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.386811 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.388180 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.398475 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.398646 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.398742 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.398657 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.398768 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.399027 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.399189 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.399507 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.402577 4924 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.402838 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.405445 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.406062 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.406916 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.418140 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.418642 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4gbtq"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.419014 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.419474 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.419744 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.421108 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9dvjv"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.421451 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.421788 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.421875 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9dvjv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.424265 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.424508 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.424548 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.424751 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.424833 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.425041 4924 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.425184 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.425665 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.425682 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426025 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426067 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426222 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426282 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426397 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426597 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.426900 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.427061 
4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.427157 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.427265 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.431705 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qqvrl"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.432499 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.433313 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.434063 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.436481 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.436981 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.437233 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j44sk"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.437800 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.444512 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.445543 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.445736 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.445910 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.448870 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.449563 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.449659 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.449666 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.450278 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.451518 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 
13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.451684 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.452475 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453422 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453572 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453785 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453894 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.454024 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.454123 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.454217 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453589 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.454385 4924 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"console-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453634 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.454577 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.453804 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.456252 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.471489 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.471543 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.471489 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.471961 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.472219 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.472498 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 
13:55:23.472883 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.472984 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.476418 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.476716 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.476772 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.476907 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.476980 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.477085 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.477151 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.477311 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.477512 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 13:55:23 crc 
kubenswrapper[4924]: I1211 13:55:23.477876 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.478024 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.485509 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.495502 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-serving-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.495743 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-node-pullsecrets\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.495880 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-audit\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.495983 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-serving-cert\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496082 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496181 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsznq\" (UniqueName: \"kubernetes.io/projected/7e529814-a09b-4dff-b79d-5525a16ce269-kube-api-access-qsznq\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496284 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-audit-dir\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496424 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496527 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-client\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496621 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-encryption-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.496737 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-image-import-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.498594 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.499301 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.499801 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.500247 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.500472 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pb2zr"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.500944 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.503430 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.503987 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504197 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504512 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504841 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.508650 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504859 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504885 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504931 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.504963 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.505042 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.505079 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.511860 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.512386 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.512844 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.513146 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517109 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517352 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517402 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517821 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517902 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.518043 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.517364 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.518510 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.518754 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25w25"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.519136 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.520403 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.520767 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.522892 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nf4pv"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.523213 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.523834 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.524299 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.525257 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.525812 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.526436 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.527078 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.527505 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.530697 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.531757 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.532617 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.532979 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.533696 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.534242 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.534353 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.534947 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.541949 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.541999 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qqvrl"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.542024 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j44sk"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.551113 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g462z"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.551689 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mcpjh"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.552059 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g462z"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.552077 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.552819 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.553494 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.554758 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.562701 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.563390 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.563787 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.565115 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.567135 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.573612 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.575359 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.576260 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.576856 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4gbtq"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.578195 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.584924 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvxls"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.586123 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.586767 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9dvjv"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.588125 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.588633 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.589394 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.590686 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.590856 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.591270 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.592117 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.592645 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.593176 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.593519 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.594061 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.595181 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.596718 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p7n6k"]
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.597553 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e85af7-7db3-49b8-8a77-77f4f5783916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.597587 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.597593 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.597621 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zcb\" (UniqueName: \"kubernetes.io/projected/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-kube-api-access-87zcb\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.597645 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599037 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvn8\" (UniqueName: \"kubernetes.io/projected/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-kube-api-access-zxvn8\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599081 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-trusted-ca-bundle\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599107 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82848f0a-6b6c-442a-99b7-6db2a8076fee-serving-cert\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599139 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-client\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599187 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-encryption-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599227 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsg59\" (UniqueName: \"kubernetes.io/projected/30e85af7-7db3-49b8-8a77-77f4f5783916-kube-api-access-hsg59\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599363 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-serving-cert\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599411 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-dir\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599459 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbq5d\" (UniqueName: \"kubernetes.io/projected/16d4f535-7844-45b1-8a32-1b786e5f1b89-kube-api-access-pbq5d\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599484 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/71f847e8-3ec3-43fb-a33e-6c71a867e160-machine-approver-tls\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599511 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tf2z\" (UniqueName: \"kubernetes.io/projected/4c3c7d59-0131-4a77-9828-7a78ff18a8ab-kube-api-access-8tf2z\") pod \"downloads-7954f5f757-9dvjv\" (UID: \"4c3c7d59-0131-4a77-9828-7a78ff18a8ab\") " pod="openshift-console/downloads-7954f5f757-9dvjv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599596 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-client\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599630 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjp6p\" (UniqueName: \"kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599657 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mqj\" (UniqueName: \"kubernetes.io/projected/71f847e8-3ec3-43fb-a33e-6c71a867e160-kube-api-access-48mqj\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599700 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-config\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599726 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-audit\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599749 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-trusted-ca\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599774 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599873 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-auth-proxy-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.599959 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600020 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-oauth-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600116 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600165 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600221 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600254 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600295 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgnr\" (UniqueName: \"kubernetes.io/projected/c07d7a97-69d8-4feb-8cd6-6cbbd40cb057-kube-api-access-kbgnr\") pod \"migrator-59844c95c7-g7gp9\" (UID: \"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600396 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600423 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpbzc\" (UniqueName: \"kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600447 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-oauth-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600483 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600613 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-policies\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600695 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-image-import-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600730 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-service-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600789 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767wh\" (UniqueName: \"kubernetes.io/projected/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-kube-api-access-767wh\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600924 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.600967 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-serving-cert\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601003 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/563b2379-6f6c-4604-90e7-786d71191a32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601022 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601062 4924 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x79dx\" (UniqueName: \"kubernetes.io/projected/82848f0a-6b6c-442a-99b7-6db2a8076fee-kube-api-access-x79dx\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601093 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601112 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ckc\" (UniqueName: \"kubernetes.io/projected/563b2379-6f6c-4604-90e7-786d71191a32-kube-api-access-f6ckc\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601129 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd3bd95-adc7-417f-8128-e85165f8c2af-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601172 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-serving-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601516 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601556 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601583 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601620 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-config\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601647 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-node-pullsecrets\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601787 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.601938 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pb2zr"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.602057 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-node-pullsecrets\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.602493 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-trusted-ca-bundle\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.603055 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-j96gb"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.603707 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-serving-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.603853 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-image-import-ca\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604073 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhhq\" (UniqueName: \"kubernetes.io/projected/cbd3bd95-adc7-417f-8128-e85165f8c2af-kube-api-access-cwhhq\") pod \"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604130 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-serving-cert\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604158 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-service-ca\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604181 4924 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604202 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsznq\" (UniqueName: \"kubernetes.io/projected/7e529814-a09b-4dff-b79d-5525a16ce269-kube-api-access-qsznq\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604218 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30e85af7-7db3-49b8-8a77-77f4f5783916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604241 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-audit-dir\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604262 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-client\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604400 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7e529814-a09b-4dff-b79d-5525a16ce269-audit-dir\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.604567 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-encryption-config\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.606053 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7e529814-a09b-4dff-b79d-5525a16ce269-audit\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.607424 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.607510 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.610017 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.610985 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-serving-cert\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.611384 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.612010 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.612658 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-encryption-config\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.614746 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.615837 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.616070 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.616092 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.616614 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7e529814-a09b-4dff-b79d-5525a16ce269-etcd-client\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.618281 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.620391 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.620441 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvxls"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.621567 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.623239 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.624604 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-g462z"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.625510 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j96gb"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.626523 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25w25"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.628892 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-45h24"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.629561 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.630201 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tl6gg"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.630793 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.631427 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.632477 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.632830 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.633515 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.634532 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.635529 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.636549 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.643254 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.645143 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-45h24"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.647846 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.649446 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.650953 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.652629 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.652936 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.654191 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.655748 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.657504 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p7n6k"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.659019 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tvrr4"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.660230 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.660584 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tvrr4"] Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.673425 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.693432 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705786 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30e85af7-7db3-49b8-8a77-77f4f5783916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705831 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-client\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705849 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-encryption-config\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 
13:55:23.705871 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-images\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705886 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a36823c9-41a3-4eb3-9e85-634f33d00f85-metrics-tls\") pod \"dns-operator-744455d44c-pb2zr\" (UID: \"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705920 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e85af7-7db3-49b8-8a77-77f4f5783916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705937 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338bc6d6-6434-4f98-bc51-e5aaa047dd58-serving-cert\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.705954 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcvh\" (UniqueName: \"kubernetes.io/projected/4fb40320-55f0-4b7c-9943-29a8abdf5943-kube-api-access-6dcvh\") pod 
\"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706517 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706549 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zcb\" (UniqueName: \"kubernetes.io/projected/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-kube-api-access-87zcb\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706575 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706597 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvn8\" (UniqueName: \"kubernetes.io/projected/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-kube-api-access-zxvn8\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706645 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-trusted-ca-bundle\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706679 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82848f0a-6b6c-442a-99b7-6db2a8076fee-serving-cert\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706704 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa27a09-ad13-4e29-8687-aab1bb1eb438-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706725 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fb40320-55f0-4b7c-9943-29a8abdf5943-service-ca-bundle\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706745 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsg59\" (UniqueName: \"kubernetes.io/projected/30e85af7-7db3-49b8-8a77-77f4f5783916-kube-api-access-hsg59\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706763 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-serving-cert\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706780 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-dir\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706804 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbq5d\" (UniqueName: \"kubernetes.io/projected/16d4f535-7844-45b1-8a32-1b786e5f1b89-kube-api-access-pbq5d\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706822 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/71f847e8-3ec3-43fb-a33e-6c71a867e160-machine-approver-tls\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706836 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tf2z\" (UniqueName: 
\"kubernetes.io/projected/4c3c7d59-0131-4a77-9828-7a78ff18a8ab-kube-api-access-8tf2z\") pod \"downloads-7954f5f757-9dvjv\" (UID: \"4c3c7d59-0131-4a77-9828-7a78ff18a8ab\") " pod="openshift-console/downloads-7954f5f757-9dvjv" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706851 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-metrics-certs\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706869 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-client\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706886 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706904 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjp6p\" (UniqueName: \"kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 
13:55:23.706921 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mqj\" (UniqueName: \"kubernetes.io/projected/71f847e8-3ec3-43fb-a33e-6c71a867e160-kube-api-access-48mqj\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706938 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706953 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706970 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-metrics-tls\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706985 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-config\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706999 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-stats-auth\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707023 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-config\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707039 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-trusted-ca\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707056 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn6l7\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-kube-api-access-tn6l7\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707088 4924 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/488b109d-f524-4b78-a1a9-d07a1178236d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707106 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-oauth-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707124 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-auth-proxy-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707142 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707159 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707174 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707190 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-service-ca-bundle\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707207 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707230 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707247 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-profile-collector-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707265 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgnr\" (UniqueName: \"kubernetes.io/projected/c07d7a97-69d8-4feb-8cd6-6cbbd40cb057-kube-api-access-kbgnr\") pod \"migrator-59844c95c7-g7gp9\" (UID: \"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707279 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67kw8\" (UniqueName: \"kubernetes.io/projected/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-kube-api-access-67kw8\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707296 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707313 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrxb\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-kube-api-access-mcrxb\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 
13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707358 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpbzc\" (UniqueName: \"kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707376 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-oauth-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707393 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707410 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-policies\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707427 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-trusted-ca\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707443 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzcn\" (UniqueName: \"kubernetes.io/projected/a36823c9-41a3-4eb3-9e85-634f33d00f85-kube-api-access-7gzcn\") pod \"dns-operator-744455d44c-pb2zr\" (UID: \"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707465 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-service-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707480 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767wh\" (UniqueName: \"kubernetes.io/projected/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-kube-api-access-767wh\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707499 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707517 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert\") pod 
\"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707532 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-serving-cert\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707552 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/563b2379-6f6c-4604-90e7-786d71191a32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707588 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x79dx\" (UniqueName: \"kubernetes.io/projected/82848f0a-6b6c-442a-99b7-6db2a8076fee-kube-api-access-x79dx\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707614 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707632 
4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.707086 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-available-featuregates\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.706800 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30e85af7-7db3-49b8-8a77-77f4f5783916-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708096 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-trusted-ca-bundle\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708362 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd3bd95-adc7-417f-8128-e85165f8c2af-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708393 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708420 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6ckc\" (UniqueName: \"kubernetes.io/projected/563b2379-6f6c-4604-90e7-786d71191a32-kube-api-access-f6ckc\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708452 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh52x\" (UniqueName: \"kubernetes.io/projected/338bc6d6-6434-4f98-bc51-e5aaa047dd58-kube-api-access-dh52x\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708476 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa27a09-ad13-4e29-8687-aab1bb1eb438-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708504 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708527 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708550 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708578 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-config\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708602 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-config\") pod 
\"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708630 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhhq\" (UniqueName: \"kubernetes.io/projected/cbd3bd95-adc7-417f-8128-e85165f8c2af-kube-api-access-cwhhq\") pod \"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708665 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-dir\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.708723 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-config\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709016 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709078 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-oauth-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709630 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709709 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-auth-proxy-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709902 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-trusted-ca\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.709994 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-encryption-config\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710185 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710375 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-client\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710381 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710598 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-audit-policies\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710853 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71f847e8-3ec3-43fb-a33e-6c71a867e160-config\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710902 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.710983 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82848f0a-6b6c-442a-99b7-6db2a8076fee-serving-cert\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711270 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-etcd-service-ca\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711312 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-serving-cert\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711418 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16d4f535-7844-45b1-8a32-1b786e5f1b89-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711619 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711717 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-service-ca\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711767 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-srv-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711783 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbd3bd95-adc7-417f-8128-e85165f8c2af-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.711805 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-default-certificate\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 
13:55:23.711929 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/71f847e8-3ec3-43fb-a33e-6c71a867e160-machine-approver-tls\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.712284 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82848f0a-6b6c-442a-99b7-6db2a8076fee-config\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.712361 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.712385 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjt5\" (UniqueName: \"kubernetes.io/projected/488b109d-f524-4b78-a1a9-d07a1178236d-kube-api-access-hjjt5\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.712777 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.713029 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-service-ca\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.713439 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.713503 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.713821 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30e85af7-7db3-49b8-8a77-77f4f5783916-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.714501 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.714515 4924 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-serving-cert\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.715296 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.715575 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-serving-cert\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.715809 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-console-oauth-config\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.716113 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-etcd-client\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.732842 
4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16d4f535-7844-45b1-8a32-1b786e5f1b89-serving-cert\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.733207 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.753757 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.773236 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.782577 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.782603 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.793682 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813360 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813458 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrxb\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-kube-api-access-mcrxb\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813493 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-trusted-ca\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813512 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzcn\" (UniqueName: \"kubernetes.io/projected/a36823c9-41a3-4eb3-9e85-634f33d00f85-kube-api-access-7gzcn\") pod \"dns-operator-744455d44c-pb2zr\" (UID: \"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813559 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-trusted-ca-bundle\") 
pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813574 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813594 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh52x\" (UniqueName: \"kubernetes.io/projected/338bc6d6-6434-4f98-bc51-e5aaa047dd58-kube-api-access-dh52x\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813612 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa27a09-ad13-4e29-8687-aab1bb1eb438-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813632 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-config\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc 
kubenswrapper[4924]: I1211 13:55:23.813653 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-srv-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813668 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-default-certificate\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813685 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjt5\" (UniqueName: \"kubernetes.io/projected/488b109d-f524-4b78-a1a9-d07a1178236d-kube-api-access-hjjt5\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813724 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-images\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813748 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a36823c9-41a3-4eb3-9e85-634f33d00f85-metrics-tls\") pod \"dns-operator-744455d44c-pb2zr\" (UID: \"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813783 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338bc6d6-6434-4f98-bc51-e5aaa047dd58-serving-cert\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813799 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcvh\" (UniqueName: \"kubernetes.io/projected/4fb40320-55f0-4b7c-9943-29a8abdf5943-kube-api-access-6dcvh\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813829 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa27a09-ad13-4e29-8687-aab1bb1eb438-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813844 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fb40320-55f0-4b7c-9943-29a8abdf5943-service-ca-bundle\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813882 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-metrics-certs\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813904 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813931 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813947 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.813964 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-metrics-tls\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc 
kubenswrapper[4924]: I1211 13:55:23.813983 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-config\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814013 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-stats-auth\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814053 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn6l7\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-kube-api-access-tn6l7\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814080 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/488b109d-f524-4b78-a1a9-d07a1178236d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814101 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-service-ca-bundle\") pod \"authentication-operator-69f744f599-25w25\" (UID: 
\"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814117 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814142 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-profile-collector-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.814164 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67kw8\" (UniqueName: \"kubernetes.io/projected/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-kube-api-access-67kw8\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.815230 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.815620 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-service-ca-bundle\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.815837 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/338bc6d6-6434-4f98-bc51-e5aaa047dd58-config\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.816809 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-config\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.817866 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/488b109d-f524-4b78-a1a9-d07a1178236d-images\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.818109 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-profile-collector-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.818388 
4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfa27a09-ad13-4e29-8687-aab1bb1eb438-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.818484 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.818859 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bfa27a09-ad13-4e29-8687-aab1bb1eb438-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.818984 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.819002 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a36823c9-41a3-4eb3-9e85-634f33d00f85-metrics-tls\") pod \"dns-operator-744455d44c-pb2zr\" (UID: 
\"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.819163 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/338bc6d6-6434-4f98-bc51-e5aaa047dd58-serving-cert\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.820000 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-srv-cert\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.820988 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/488b109d-f524-4b78-a1a9-d07a1178236d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.832464 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.852844 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.872827 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 13:55:23 crc 
kubenswrapper[4924]: I1211 13:55:23.892957 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.933475 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.952993 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.973142 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 13:55:23 crc kubenswrapper[4924]: I1211 13:55:23.992944 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.013206 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.033737 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.061387 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.073987 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.080568 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-default-certificate\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh" 
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.094160 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.099546 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-stats-auth\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.112670 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.120034 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4fb40320-55f0-4b7c-9943-29a8abdf5943-metrics-certs\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.133116 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.136042 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fb40320-55f0-4b7c-9943-29a8abdf5943-service-ca-bundle\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.152671 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.173388 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.194161 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.213267 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.233881 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.254361 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.273797 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.293106 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.300782 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-metrics-tls\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.322079 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.326308 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-trusted-ca\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.334404 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.353315 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.373794 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.393525 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.413405 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.433454 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.453369 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.474452 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.493253 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.514271 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.527534 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/563b2379-6f6c-4604-90e7-786d71191a32-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.533662 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.553655 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.581294 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.591638 4924 request.go:700] Waited for 1.000095409s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&limit=500&resourceVersion=0
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.593467 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.613677 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.634513 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.653313 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.673262 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.693116 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.714863 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.733385 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.753708 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.773525 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.796255 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.813105 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.834435 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.853446 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.882996 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.893190 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.913923 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.953621 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.960836 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.976838 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 11 13:55:24 crc kubenswrapper[4924]: I1211 13:55:24.993523 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.013103 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.033447 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.054166 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.073590 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.093221 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.112816 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.133686 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.187812 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsznq\" (UniqueName: \"kubernetes.io/projected/7e529814-a09b-4dff-b79d-5525a16ce269-kube-api-access-qsznq\") pod \"apiserver-76f77b778f-nf4pv\" (UID: \"7e529814-a09b-4dff-b79d-5525a16ce269\") " pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.193080 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.213315 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.225101 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.232982 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.253601 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.273842 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.293381 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.313881 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.333489 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.353010 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.372951 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.392918 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.401121 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-nf4pv"]
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.412452 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.432761 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.453296 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.472938 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.493301 4924 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.512965 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.547606 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zcb\" (UniqueName: \"kubernetes.io/projected/4632c4d1-bc4e-41f4-89e8-4702ab9397c7-kube-api-access-87zcb\") pod \"console-f9d7485db-4gbtq\" (UID: \"4632c4d1-bc4e-41f4-89e8-4702ab9397c7\") " pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.565943 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvn8\" (UniqueName: \"kubernetes.io/projected/c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34-kube-api-access-zxvn8\") pod \"openshift-config-operator-7777fb866f-wzsqk\" (UID: \"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.587161 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mqj\" (UniqueName: \"kubernetes.io/projected/71f847e8-3ec3-43fb-a33e-6c71a867e160-kube-api-access-48mqj\") pod \"machine-approver-56656f9798-t2zg5\" (UID: \"71f847e8-3ec3-43fb-a33e-6c71a867e160\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.591976 4924 request.go:700] Waited for 1.884458895s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.607421 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsg59\" (UniqueName: \"kubernetes.io/projected/30e85af7-7db3-49b8-8a77-77f4f5783916-kube-api-access-hsg59\") pod \"openshift-controller-manager-operator-756b6f6bc6-6tc2t\" (UID: \"30e85af7-7db3-49b8-8a77-77f4f5783916\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.629464 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tf2z\" (UniqueName: \"kubernetes.io/projected/4c3c7d59-0131-4a77-9828-7a78ff18a8ab-kube-api-access-8tf2z\") pod \"downloads-7954f5f757-9dvjv\" (UID: \"4c3c7d59-0131-4a77-9828-7a78ff18a8ab\") " pod="openshift-console/downloads-7954f5f757-9dvjv"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.639110 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4gbtq"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.648166 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbq5d\" (UniqueName: \"kubernetes.io/projected/16d4f535-7844-45b1-8a32-1b786e5f1b89-kube-api-access-pbq5d\") pod \"apiserver-7bbb656c7d-77gwr\" (UID: \"16d4f535-7844-45b1-8a32-1b786e5f1b89\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.667929 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.669887 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpbzc\" (UniqueName: \"kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc\") pod \"controller-manager-879f6c89f-6cqwz\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.679155 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.687600 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9dvjv"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.691465 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767wh\" (UniqueName: \"kubernetes.io/projected/7ae622b9-9a50-4f4c-8bf0-e845579c5f44-kube-api-access-767wh\") pod \"etcd-operator-b45778765-j44sk\" (UID: \"7ae622b9-9a50-4f4c-8bf0-e845579c5f44\") " pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:25 crc kubenswrapper[4924]: W1211 13:55:25.704484 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71f847e8_3ec3_43fb_a33e_6c71a867e160.slice/crio-cb9436f3b54edc6bc5b15c2570b524d898574bae970224161cc8ec6410ba87d5 WatchSource:0}: Error finding container cb9436f3b54edc6bc5b15c2570b524d898574bae970224161cc8ec6410ba87d5: Status 404 returned error can't find the container with id cb9436f3b54edc6bc5b15c2570b524d898574bae970224161cc8ec6410ba87d5
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.705589 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.712237 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjp6p\" (UniqueName: \"kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p\") pod \"route-controller-manager-6576b87f9c-sc25c\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.725764 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.735346 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbgnr\" (UniqueName: \"kubernetes.io/projected/c07d7a97-69d8-4feb-8cd6-6cbbd40cb057-kube-api-access-kbgnr\") pod \"migrator-59844c95c7-g7gp9\" (UID: \"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.749907 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhhq\" (UniqueName: \"kubernetes.io/projected/cbd3bd95-adc7-417f-8128-e85165f8c2af-kube-api-access-cwhhq\") pod \"cluster-samples-operator-665b6dd947-twjdt\" (UID: \"cbd3bd95-adc7-417f-8128-e85165f8c2af\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.763172 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" event={"ID":"7e529814-a09b-4dff-b79d-5525a16ce269","Type":"ContainerStarted","Data":"f3a439d06a15b08d392474eff8200f7bd319473dc4af7fd4387adee3a46298ed"}
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.773046 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ckc\" (UniqueName: \"kubernetes.io/projected/563b2379-6f6c-4604-90e7-786d71191a32-kube-api-access-f6ckc\") pod \"control-plane-machine-set-operator-78cbb6b69f-n9z8x\" (UID: \"563b2379-6f6c-4604-90e7-786d71191a32\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.790659 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x79dx\" (UniqueName: \"kubernetes.io/projected/82848f0a-6b6c-442a-99b7-6db2a8076fee-kube-api-access-x79dx\") pod \"console-operator-58897d9998-qqvrl\" (UID: \"82848f0a-6b6c-442a-99b7-6db2a8076fee\") " pod="openshift-console-operator/console-operator-58897d9998-qqvrl"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.794548 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.802578 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.813904 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.849340 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrxb\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-kube-api-access-mcrxb\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.868648 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67kw8\" (UniqueName: \"kubernetes.io/projected/4cb1b5d8-52a1-4274-9ccc-45ac4533481d-kube-api-access-67kw8\") pod \"catalog-operator-68c6474976-xc4q4\" (UID: \"4cb1b5d8-52a1-4274-9ccc-45ac4533481d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.873573 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.878145 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4gbtq"]
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.891939 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzcn\" (UniqueName: \"kubernetes.io/projected/a36823c9-41a3-4eb3-9e85-634f33d00f85-kube-api-access-7gzcn\") pod \"dns-operator-744455d44c-pb2zr\" (UID: \"a36823c9-41a3-4eb3-9e85-634f33d00f85\") " pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.899087 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t"]
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.903019 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.908248 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9dvjv"]
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.909864 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqvgm\" (UID: \"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.917427 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.925165 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh52x\" (UniqueName: \"kubernetes.io/projected/338bc6d6-6434-4f98-bc51-e5aaa047dd58-kube-api-access-dh52x\") pod \"authentication-operator-69f744f599-25w25\" (UID: \"338bc6d6-6434-4f98-bc51-e5aaa047dd58\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.925420 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.946119 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74f2dd5a-bc29-4b79-bccb-28c4b32f1947-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jq8sj\" (UID: \"74f2dd5a-bc29-4b79-bccb-28c4b32f1947\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.956083 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.963622 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.987802 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcvh\" (UniqueName: \"kubernetes.io/projected/4fb40320-55f0-4b7c-9943-29a8abdf5943-kube-api-access-6dcvh\") pod \"router-default-5444994796-mcpjh\" (UID: \"4fb40320-55f0-4b7c-9943-29a8abdf5943\") " pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:25 crc kubenswrapper[4924]: I1211 13:55:25.995642 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-qqvrl"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.010080 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjt5\" (UniqueName: \"kubernetes.io/projected/488b109d-f524-4b78-a1a9-d07a1178236d-kube-api-access-hjjt5\") pod \"machine-api-operator-5694c8668f-dd9cn\" (UID: \"488b109d-f524-4b78-a1a9-d07a1178236d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.038072 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn6l7\" (UniqueName: \"kubernetes.io/projected/bfa27a09-ad13-4e29-8687-aab1bb1eb438-kube-api-access-tn6l7\") pod \"cluster-image-registry-operator-dc59b4c8b-vcfc9\" (UID: \"bfa27a09-ad13-4e29-8687-aab1bb1eb438\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.041505 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.047943 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.222488 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.222950 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.223202 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.223575 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mcpjh"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.224111 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225152 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxvfg\" (UniqueName: \"kubernetes.io/projected/2fe39169-62e5-4364-bf39-1bd6cccbf231-kube-api-access-nxvfg\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225195 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225215 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-webhook-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225231 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fe39169-62e5-4364-bf39-1bd6cccbf231-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225246 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-cabundle\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225269 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b3a192-0bab-4362-8737-f4725a4ff976-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225290 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqd2\" (UniqueName: \"kubernetes.io/projected/8dee4c98-7d07-4923-a8e5-47d322cc35d6-kube-api-access-9jqd2\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225314 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225357 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225384 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225401 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8n9z\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225426 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225441 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dc288-a9c5-40b3-9170-52c8095c515c-tmpfs\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225455 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn2jd\" (UniqueName: \"kubernetes.io/projected/d4b3a192-0bab-4362-8737-f4725a4ff976-kube-api-access-sn2jd\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225471 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcmhl\" (UniqueName: \"kubernetes.io/projected/3f3dc288-a9c5-40b3-9170-52c8095c515c-kube-api-access-xcmhl\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225500 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225518 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls\") pod \"image-registry-697d97f7c8-vwnfk\" (UID:
\"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225540 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-key\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225571 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fe39169-62e5-4364-bf39-1bd6cccbf231-proxy-tls\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.225602 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.225889 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:26.725861335 +0000 UTC m=+140.235342412 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: W1211 13:55:26.241725 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30e85af7_7db3_49b8_8a77_77f4f5783916.slice/crio-993f0b97bfea8039e4d731da05e1022a5ed33a4788f89168c20b844a31c7294b WatchSource:0}: Error finding container 993f0b97bfea8039e4d731da05e1022a5ed33a4788f89168c20b844a31c7294b: Status 404 returned error can't find the container with id 993f0b97bfea8039e4d731da05e1022a5ed33a4788f89168c20b844a31c7294b Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326708 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326884 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326914 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q8n9z\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326944 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qhs\" (UniqueName: \"kubernetes.io/projected/487ceb97-cea5-4120-900d-dcd405f2f574-kube-api-access-49qhs\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326964 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dc288-a9c5-40b3-9170-52c8095c515c-tmpfs\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.326990 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-serving-cert\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327013 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kgk\" (UniqueName: \"kubernetes.io/projected/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-kube-api-access-x8kgk\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327049 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-key\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327075 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fe39169-62e5-4364-bf39-1bd6cccbf231-proxy-tls\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327100 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7wn\" (UniqueName: \"kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327116 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/487ceb97-cea5-4120-900d-dcd405f2f574-proxy-tls\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327142 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327157 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f12434db-4aac-48f2-9911-5c04ad6b461d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327183 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvhj\" (UniqueName: \"kubernetes.io/projected/f21d0ab3-26d3-4991-bc1f-28b90a55098f-kube-api-access-5xvhj\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327200 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dn6c\" (UniqueName: \"kubernetes.io/projected/768953b9-4fd1-45ac-bb0c-11dde7a825ff-kube-api-access-8dn6c\") pod \"multus-admission-controller-857f4d67dd-nvxls\" (UID: \"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327225 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: 
\"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327239 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-images\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327258 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-webhook-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327301 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f12434db-4aac-48f2-9911-5c04ad6b461d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327362 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b3a192-0bab-4362-8737-f4725a4ff976-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327391 4924 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327453 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327482 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327520 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327607 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/768953b9-4fd1-45ac-bb0c-11dde7a825ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nvxls\" (UID: 
\"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327628 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327644 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327661 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327677 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-config\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327693 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xcmhl\" (UniqueName: \"kubernetes.io/projected/3f3dc288-a9c5-40b3-9170-52c8095c515c-kube-api-access-xcmhl\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327708 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn2jd\" (UniqueName: \"kubernetes.io/projected/d4b3a192-0bab-4362-8737-f4725a4ff976-kube-api-access-sn2jd\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327758 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd740900-c96d-4342-bedb-c8f5ef2acc49-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327777 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327794 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327821 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327851 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxvfg\" (UniqueName: \"kubernetes.io/projected/2fe39169-62e5-4364-bf39-1bd6cccbf231-kube-api-access-nxvfg\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327872 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327888 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrcmt\" (UniqueName: \"kubernetes.io/projected/fd740900-c96d-4342-bedb-c8f5ef2acc49-kube-api-access-hrcmt\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 
13:55:26.327930 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5px\" (UniqueName: \"kubernetes.io/projected/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-kube-api-access-lz5px\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.327966 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328004 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fe39169-62e5-4364-bf39-1bd6cccbf231-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328021 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd740900-c96d-4342-bedb-c8f5ef2acc49-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328045 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-cabundle\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328088 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-srv-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328105 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12434db-4aac-48f2-9911-5c04ad6b461d-config\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328122 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjmw\" (UniqueName: \"kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.328203 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqd2\" (UniqueName: \"kubernetes.io/projected/8dee4c98-7d07-4923-a8e5-47d322cc35d6-kube-api-access-9jqd2\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc 
kubenswrapper[4924]: E1211 13:55:26.329281 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:26.829262682 +0000 UTC m=+140.338743659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.330160 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3f3dc288-a9c5-40b3-9170-52c8095c515c-tmpfs\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.330756 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.333094 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.333882 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2fe39169-62e5-4364-bf39-1bd6cccbf231-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.334896 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-key\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.335003 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.335413 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2fe39169-62e5-4364-bf39-1bd6cccbf231-proxy-tls\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.335521 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-webhook-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.336446 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8dee4c98-7d07-4923-a8e5-47d322cc35d6-signing-cabundle\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.337408 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3f3dc288-a9c5-40b3-9170-52c8095c515c-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.337471 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls\") pod 
\"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.337570 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.337907 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b3a192-0bab-4362-8737-f4725a4ff976-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.336315 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.375224 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.410102 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk"] Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.427354 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8n9z\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.428974 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcmhl\" (UniqueName: \"kubernetes.io/projected/3f3dc288-a9c5-40b3-9170-52c8095c515c-kube-api-access-xcmhl\") pod \"packageserver-d55dfcdfc-7hdnq\" (UID: \"3f3dc288-a9c5-40b3-9170-52c8095c515c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429455 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429509 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f12434db-4aac-48f2-9911-5c04ad6b461d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429538 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429564 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429618 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429635 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77t4\" (UniqueName: \"kubernetes.io/projected/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-kube-api-access-x77t4\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429654 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429694 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/768953b9-4fd1-45ac-bb0c-11dde7a825ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nvxls\" (UID: \"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429711 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-registration-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429736 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429751 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429768 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-config\") pod 
\"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429784 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429807 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd740900-c96d-4342-bedb-c8f5ef2acc49-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429835 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c28d8\" (UniqueName: \"kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429869 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt7cf\" (UniqueName: \"kubernetes.io/projected/0a5809bc-0192-4557-b2db-0f011c4a2cd0-kube-api-access-nt7cf\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc 
kubenswrapper[4924]: I1211 13:55:26.429885 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429932 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429970 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-cert\") pod \"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.429985 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430003 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430102 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430120 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrcmt\" (UniqueName: \"kubernetes.io/projected/fd740900-c96d-4342-bedb-c8f5ef2acc49-kube-api-access-hrcmt\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430135 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz5px\" (UniqueName: \"kubernetes.io/projected/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-kube-api-access-lz5px\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430149 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: 
\"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430172 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13f0d8ea-06bd-477e-86ec-bc46704d8784-config-volume\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430216 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd740900-c96d-4342-bedb-c8f5ef2acc49-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430230 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f0d8ea-06bd-477e-86ec-bc46704d8784-metrics-tls\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430255 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-mountpoint-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430270 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-plugins-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430285 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85cbfdb-67a4-47f3-9087-b76b960fbc62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430314 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-srv-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430354 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12434db-4aac-48f2-9911-5c04ad6b461d-config\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430371 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjmw\" (UniqueName: \"kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: 
I1211 13:55:26.430410 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430450 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-csi-data-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430465 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49qhs\" (UniqueName: \"kubernetes.io/projected/487ceb97-cea5-4120-900d-dcd405f2f574-kube-api-access-49qhs\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430496 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430512 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430542 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-serving-cert\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430559 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430576 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kgk\" (UniqueName: \"kubernetes.io/projected/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-kube-api-access-x8kgk\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430631 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-socket-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430650 4924 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85cbfdb-67a4-47f3-9087-b76b960fbc62-config\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.430675 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcpw\" (UniqueName: \"kubernetes.io/projected/13f0d8ea-06bd-477e-86ec-bc46704d8784-kube-api-access-bkcpw\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431025 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn2jd\" (UniqueName: \"kubernetes.io/projected/d4b3a192-0bab-4362-8737-f4725a4ff976-kube-api-access-sn2jd\") pod \"package-server-manager-789f6589d5-v7bhg\" (UID: \"d4b3a192-0bab-4362-8737-f4725a4ff976\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431808 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-certs\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431869 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-node-bootstrap-token\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " 
pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431918 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7wn\" (UniqueName: \"kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431943 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/487ceb97-cea5-4120-900d-dcd405f2f574-proxy-tls\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.431974 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f12434db-4aac-48f2-9911-5c04ad6b461d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432000 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85cbfdb-67a4-47f3-9087-b76b960fbc62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432025 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fn5fm\" (UniqueName: \"kubernetes.io/projected/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-kube-api-access-fn5fm\") pod \"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432045 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432075 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvhj\" (UniqueName: \"kubernetes.io/projected/f21d0ab3-26d3-4991-bc1f-28b90a55098f-kube-api-access-5xvhj\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432102 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dn6c\" (UniqueName: \"kubernetes.io/projected/768953b9-4fd1-45ac-bb0c-11dde7a825ff-kube-api-access-8dn6c\") pod \"multus-admission-controller-857f4d67dd-nvxls\" (UID: \"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432125 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: 
\"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432128 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-config\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432154 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432185 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.432209 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-images\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.433052 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fd740900-c96d-4342-bedb-c8f5ef2acc49-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.433599 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.433622 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f12434db-4aac-48f2-9911-5c04ad6b461d-config\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.434244 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.435551 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:26.935534209 +0000 UTC m=+140.445015186 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.435927 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-images\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.436088 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-config\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.436306 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/487ceb97-cea5-4120-900d-dcd405f2f574-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.449229 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/768953b9-4fd1-45ac-bb0c-11dde7a825ff-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-nvxls\" (UID: \"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.449317 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.449961 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.452796 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-serving-cert\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.457502 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.457750 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f12434db-4aac-48f2-9911-5c04ad6b461d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.459564 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd740900-c96d-4342-bedb-c8f5ef2acc49-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.459637 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f21d0ab3-26d3-4991-bc1f-28b90a55098f-srv-cert\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.460991 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.460989 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.466275 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/487ceb97-cea5-4120-900d-dcd405f2f574-proxy-tls\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.479246 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqd2\" (UniqueName: \"kubernetes.io/projected/8dee4c98-7d07-4923-a8e5-47d322cc35d6-kube-api-access-9jqd2\") pod \"service-ca-9c57cc56f-g462z\" (UID: \"8dee4c98-7d07-4923-a8e5-47d322cc35d6\") " pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.510770 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxvfg\" (UniqueName: \"kubernetes.io/projected/2fe39169-62e5-4364-bf39-1bd6cccbf231-kube-api-access-nxvfg\") pod \"machine-config-controller-84d6567774-rxbwv\" (UID: \"2fe39169-62e5-4364-bf39-1bd6cccbf231\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.518416 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz5px\" (UniqueName: \"kubernetes.io/projected/f1324f2e-d3ea-444d-9e6c-ff3ea41d6502-kube-api-access-lz5px\") pod \"openshift-apiserver-operator-796bbdcf4f-p7rww\" (UID: \"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.526296 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.526308 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr"] Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.528505 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrcmt\" (UniqueName: \"kubernetes.io/projected/fd740900-c96d-4342-bedb-c8f5ef2acc49-kube-api-access-hrcmt\") pod \"kube-storage-version-migrator-operator-b67b599dd-x8bzp\" (UID: \"fd740900-c96d-4342-bedb-c8f5ef2acc49\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533155 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.533310 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.033285638 +0000 UTC m=+140.542766615 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533449 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533522 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77t4\" (UniqueName: \"kubernetes.io/projected/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-kube-api-access-x77t4\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533597 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533625 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-registration-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533643 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533692 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c28d8\" (UniqueName: \"kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533714 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt7cf\" (UniqueName: \"kubernetes.io/projected/0a5809bc-0192-4557-b2db-0f011c4a2cd0-kube-api-access-nt7cf\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533730 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 
13:55:26.533776 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-cert\") pod \"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533794 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533810 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533861 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533877 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13f0d8ea-06bd-477e-86ec-bc46704d8784-config-volume\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " 
pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533919 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f0d8ea-06bd-477e-86ec-bc46704d8784-metrics-tls\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533943 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-mountpoint-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533961 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-plugins-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.533976 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85cbfdb-67a4-47f3-9087-b76b960fbc62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534034 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-csi-data-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " 
pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534050 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534084 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534106 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534125 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-socket-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.534959 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-mountpoint-dir\") pod \"csi-hostpathplugin-tvrr4\" 
(UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.535276 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.535509 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.03550123 +0000 UTC m=+140.544982207 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.536028 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-registration-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.536171 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-csi-data-dir\") 
pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.536234 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-plugins-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.536531 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.537288 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.537658 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.537977 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-cert\") pod 
\"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.538282 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.538793 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a5809bc-0192-4557-b2db-0f011c4a2cd0-socket-dir\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.539834 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540658 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85cbfdb-67a4-47f3-9087-b76b960fbc62-config\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540689 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcpw\" (UniqueName: 
\"kubernetes.io/projected/13f0d8ea-06bd-477e-86ec-bc46704d8784-kube-api-access-bkcpw\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540723 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-certs\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540742 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-node-bootstrap-token\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540803 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85cbfdb-67a4-47f3-9087-b76b960fbc62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540820 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540828 4924 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-fn5fm\" (UniqueName: \"kubernetes.io/projected/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-kube-api-access-fn5fm\") pod \"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540842 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540884 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540902 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.540919 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.541182 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13f0d8ea-06bd-477e-86ec-bc46704d8784-config-volume\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.541632 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f85cbfdb-67a4-47f3-9087-b76b960fbc62-config\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.541804 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.549527 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.552510 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.554404 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.554630 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.559511 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f85cbfdb-67a4-47f3-9087-b76b960fbc62-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.559618 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.563079 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kgk\" (UniqueName: \"kubernetes.io/projected/b7c8f8d0-10da-4d63-847f-bb9f584c2ff1-kube-api-access-x8kgk\") pod \"service-ca-operator-777779d784-mj5lw\" (UID: \"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.564583 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/13f0d8ea-06bd-477e-86ec-bc46704d8784-metrics-tls\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.567220 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.570272 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-node-bootstrap-token\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.572027 
4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-certs\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.582447 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49qhs\" (UniqueName: \"kubernetes.io/projected/487ceb97-cea5-4120-900d-dcd405f2f574-kube-api-access-49qhs\") pod \"machine-config-operator-74547568cd-rkln8\" (UID: \"487ceb97-cea5-4120-900d-dcd405f2f574\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.592884 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7wn\" (UniqueName: \"kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn\") pod \"marketplace-operator-79b997595-dvnc9\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") " pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.612810 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f12434db-4aac-48f2-9911-5c04ad6b461d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7t6t7\" (UID: \"f12434db-4aac-48f2-9911-5c04ad6b461d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.632978 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvhj\" (UniqueName: \"kubernetes.io/projected/f21d0ab3-26d3-4991-bc1f-28b90a55098f-kube-api-access-5xvhj\") pod \"olm-operator-6b444d44fb-mg9gr\" (UID: \"f21d0ab3-26d3-4991-bc1f-28b90a55098f\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.641611 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.641991 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.141971673 +0000 UTC m=+140.651452650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.646715 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dn6c\" (UniqueName: \"kubernetes.io/projected/768953b9-4fd1-45ac-bb0c-11dde7a825ff-kube-api-access-8dn6c\") pod \"multus-admission-controller-857f4d67dd-nvxls\" (UID: \"768953b9-4fd1-45ac-bb0c-11dde7a825ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.656302 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.660537 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-j44sk"] Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.685992 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.686553 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjmw\" (UniqueName: \"kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw\") pod \"collect-profiles-29424345-m2qcx\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.715039 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.732210 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77t4\" (UniqueName: \"kubernetes.io/projected/2cb2bbd4-a42a-43c5-a912-f3a02418a4f1-kube-api-access-x77t4\") pod \"machine-config-server-tl6gg\" (UID: \"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1\") " pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.743479 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.743756 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.243744784 +0000 UTC m=+140.753225761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.749725 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt7cf\" (UniqueName: \"kubernetes.io/projected/0a5809bc-0192-4557-b2db-0f011c4a2cd0-kube-api-access-nt7cf\") pod \"csi-hostpathplugin-tvrr4\" (UID: \"0a5809bc-0192-4557-b2db-0f011c4a2cd0\") " pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.767938 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-g462z" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.777300 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.784416 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.801886 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.804831 4924 generic.go:334] "Generic (PLEG): container finished" podID="7e529814-a09b-4dff-b79d-5525a16ce269" containerID="fe7da8ddfdf0ca8e8ffc66c326bf8670ce2826e60d4dfa383b7c29a14843e505" exitCode=0 Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.809835 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c28d8\" (UniqueName: \"kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8\") pod \"oauth-openshift-558db77b4-p7n6k\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.810065 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.817981 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f85cbfdb-67a4-47f3-9087-b76b960fbc62-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bp59t\" (UID: \"f85cbfdb-67a4-47f3-9087-b76b960fbc62\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.828985 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcpw\" (UniqueName: \"kubernetes.io/projected/13f0d8ea-06bd-477e-86ec-bc46704d8784-kube-api-access-bkcpw\") pod \"dns-default-j96gb\" (UID: \"13f0d8ea-06bd-477e-86ec-bc46704d8784\") " pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.837710 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.841383 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.844588 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.844752 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.344726604 +0000 UTC m=+140.854207581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.844851 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.845313 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.34529728 +0000 UTC m=+140.854778257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.856135 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn5fm\" (UniqueName: \"kubernetes.io/projected/20a02c6c-b2eb-46c5-8568-29b7e394ad6e-kube-api-access-fn5fm\") pod \"ingress-canary-45h24\" (UID: \"20a02c6c-b2eb-46c5-8568-29b7e394ad6e\") " pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.859760 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.867100 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.874590 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.881720 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-45h24" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882041 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" event={"ID":"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34","Type":"ContainerStarted","Data":"c83b12208e15b0e05997f7607a6e56d15833bec2d718eec646f56851bd8a6cc3"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882095 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" event={"ID":"16d4f535-7844-45b1-8a32-1b786e5f1b89","Type":"ContainerStarted","Data":"3cb5f0b4ed45857691b97a358280982f44b9cfaa48db3b4f191692877d932948"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882110 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" event={"ID":"7ae622b9-9a50-4f4c-8bf0-e845579c5f44","Type":"ContainerStarted","Data":"6808553c08b3aa8a2710aecd4d50a7c59955125d7333565b3fe8e7ddde9a1b64"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882122 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9dvjv" event={"ID":"4c3c7d59-0131-4a77-9828-7a78ff18a8ab","Type":"ContainerStarted","Data":"eda3715d33a30453fb4fe38a699283fc2ca69fda40e115c45066d89a9431b581"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882135 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4gbtq" event={"ID":"4632c4d1-bc4e-41f4-89e8-4702ab9397c7","Type":"ContainerStarted","Data":"cc83c8a1bc0b716e898568d9a4ae3f98ce03c36cd6a27246b098767132e848d5"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882164 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" 
event={"ID":"7e529814-a09b-4dff-b79d-5525a16ce269","Type":"ContainerDied","Data":"fe7da8ddfdf0ca8e8ffc66c326bf8670ce2826e60d4dfa383b7c29a14843e505"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882179 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" event={"ID":"30e85af7-7db3-49b8-8a77-77f4f5783916","Type":"ContainerStarted","Data":"993f0b97bfea8039e4d731da05e1022a5ed33a4788f89168c20b844a31c7294b"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882191 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mcpjh" event={"ID":"4fb40320-55f0-4b7c-9943-29a8abdf5943","Type":"ContainerStarted","Data":"8c5e2bf6e7e5ed8ec8eb47dd2ef266e166204951e05681169a4f4eff3d105e81"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.882202 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" event={"ID":"71f847e8-3ec3-43fb-a33e-6c71a867e160","Type":"ContainerStarted","Data":"cb9436f3b54edc6bc5b15c2570b524d898574bae970224161cc8ec6410ba87d5"} Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.888482 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tl6gg" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.915405 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.946664 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.946887 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.446856355 +0000 UTC m=+140.956337342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:26 crc kubenswrapper[4924]: I1211 13:55:26.947052 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:26 crc kubenswrapper[4924]: E1211 13:55:26.947806 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.447791831 +0000 UTC m=+140.957272888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.048742 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.049215 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.549199042 +0000 UTC m=+141.058680019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.150965 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.650953413 +0000 UTC m=+141.160434390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.150038 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.259978 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.260518 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.760490342 +0000 UTC m=+141.269971389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.340142 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.352259 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.362050 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.362718 4924 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.862702805 +0000 UTC m=+141.372183782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.379470 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.403386 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dd9cn"] Dec 11 13:55:27 crc kubenswrapper[4924]: W1211 13:55:27.435992 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfa27a09_ad13_4e29_8687_aab1bb1eb438.slice/crio-36f5e6fa2bb80cd150d8b9f50f6ec578b1d884c0974859fef8cb7fecd7f824fe WatchSource:0}: Error finding container 36f5e6fa2bb80cd150d8b9f50f6ec578b1d884c0974859fef8cb7fecd7f824fe: Status 404 returned error can't find the container with id 36f5e6fa2bb80cd150d8b9f50f6ec578b1d884c0974859fef8cb7fecd7f824fe Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.462889 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.463172 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:27.96315701 +0000 UTC m=+141.472637987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.565069 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.565422 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.065406644 +0000 UTC m=+141.574887621 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.626566 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.645609 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.651749 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-25w25"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.660752 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-qqvrl"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.665221 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.665817 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.665923 4924 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.16590723 +0000 UTC m=+141.675388207 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.666091 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.666544 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.166529957 +0000 UTC m=+141.676010934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.766593 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.766903 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.266889229 +0000 UTC m=+141.776370206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.869228 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.869656 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.369640698 +0000 UTC m=+141.879121675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.880046 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" event={"ID":"cbd3bd95-adc7-417f-8128-e85165f8c2af","Type":"ContainerStarted","Data":"67883ce0628e17798b7823b9424feed95a44cbf94464f884e090850165f5a23f"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.896660 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" event={"ID":"71f847e8-3ec3-43fb-a33e-6c71a867e160","Type":"ContainerStarted","Data":"74c8fce246fb1c7fbeab89ad8a3ce00723fffb7ced6e6818076c61364df385fb"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.910512 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.917805 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" event={"ID":"bfa27a09-ad13-4e29-8687-aab1bb1eb438","Type":"ContainerStarted","Data":"36f5e6fa2bb80cd150d8b9f50f6ec578b1d884c0974859fef8cb7fecd7f824fe"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.927452 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" 
event={"ID":"4cb1b5d8-52a1-4274-9ccc-45ac4533481d","Type":"ContainerStarted","Data":"18ba38fcf48e560d0b14b458f6dbddb5889f65d24d668b2467eeb997e175eeec"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.931903 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pb2zr"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.937575 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.940969 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.943548 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mcpjh" event={"ID":"4fb40320-55f0-4b7c-9943-29a8abdf5943","Type":"ContainerStarted","Data":"88d8c35d32391a3b9a436c3420c67620243cf2694ad257d0e9a477fc15d15c88"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.949430 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.949474 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj"] Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.951442 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" event={"ID":"338bc6d6-6434-4f98-bc51-e5aaa047dd58","Type":"ContainerStarted","Data":"bfe631d378dc3e73c8ca6dc0327d842fe6bfad03dcbc360196744c9dabcd4dee"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.954646 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" 
event={"ID":"7e529814-a09b-4dff-b79d-5525a16ce269","Type":"ContainerStarted","Data":"87d4e82f4fd7f5be77c5fdc426a59ada995c859e77656b3f1abb725412171241"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.955701 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" event={"ID":"563b2379-6f6c-4604-90e7-786d71191a32","Type":"ContainerStarted","Data":"61af7559a0d785631c485785e4c887cdb29dc090acb9fd8e9e3d7b16852c2267"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.964284 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" event={"ID":"30e85af7-7db3-49b8-8a77-77f4f5783916","Type":"ContainerStarted","Data":"21c7782ba80d84c18824c48f6d49b952121219f7a348da8f50a0f5130dbd0e5d"} Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.970368 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:27 crc kubenswrapper[4924]: E1211 13:55:27.970630 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.470614057 +0000 UTC m=+141.980095034 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.988083 4924 generic.go:334] "Generic (PLEG): container finished" podID="c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34" containerID="279a162b20ddb4113b63b6fde08ae0e44ebf39413e91438c6f5ea4bd9bfa3c4e" exitCode=0 Dec 11 13:55:27 crc kubenswrapper[4924]: I1211 13:55:27.988155 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" event={"ID":"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34","Type":"ContainerDied","Data":"279a162b20ddb4113b63b6fde08ae0e44ebf39413e91438c6f5ea4bd9bfa3c4e"} Dec 11 13:55:27 crc kubenswrapper[4924]: W1211 13:55:27.989009 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda36823c9_41a3_4eb3_9e85_634f33d00f85.slice/crio-0f71aba128312dd96fef51a395d0adeb19a3e934aeb38c19cb4535a3433a33eb WatchSource:0}: Error finding container 0f71aba128312dd96fef51a395d0adeb19a3e934aeb38c19cb4535a3433a33eb: Status 404 returned error can't find the container with id 0f71aba128312dd96fef51a395d0adeb19a3e934aeb38c19cb4535a3433a33eb Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.009383 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" event={"ID":"b94a998f-0317-4fb2-9633-c68c86337a93","Type":"ContainerStarted","Data":"cd8f7b1d868921ea1569574f669d34f2a01d8e7c440ecb934d7a339a4b208e02"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.018779 4924 
generic.go:334] "Generic (PLEG): container finished" podID="16d4f535-7844-45b1-8a32-1b786e5f1b89" containerID="f38793665cdf787bf39c62bdcdd550e9a8025e4e2a01fd29535706e206bb3622" exitCode=0 Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.018975 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" event={"ID":"16d4f535-7844-45b1-8a32-1b786e5f1b89","Type":"ContainerDied","Data":"f38793665cdf787bf39c62bdcdd550e9a8025e4e2a01fd29535706e206bb3622"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.028558 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" event={"ID":"488b109d-f524-4b78-a1a9-d07a1178236d","Type":"ContainerStarted","Data":"0aef877a9b58fd1237e4ba541b82a162c8bb0d2437aad23cb5c6aea32bceee3b"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.028608 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" event={"ID":"488b109d-f524-4b78-a1a9-d07a1178236d","Type":"ContainerStarted","Data":"9172febd5aaa7f7e3e450b0fc7e3b34108968964dc77371d15f11a76558cd35b"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.078411 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.078977 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4gbtq" event={"ID":"4632c4d1-bc4e-41f4-89e8-4702ab9397c7","Type":"ContainerStarted","Data":"9670464941f81945de1b342419960310f0f49ba692acc543d5a809dd1a3fb453"} Dec 11 13:55:28 crc kubenswrapper[4924]: 
E1211 13:55:28.079431 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.579389854 +0000 UTC m=+142.088870921 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.085087 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" event={"ID":"82848f0a-6b6c-442a-99b7-6db2a8076fee","Type":"ContainerStarted","Data":"e2d1947f697b6f9c93c6a91b6d518336bb816f411aabece6aac30e752ad1582f"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.093580 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" event={"ID":"b96c821c-a977-4a01-91d1-9c7df59ae49b","Type":"ContainerStarted","Data":"ff823afea1655473999eafdbeedd208c8e12d224d8b06ae01fd21b4eefde5f6e"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.094027 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.096068 4924 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-sc25c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 
10.217.0.8:8443: connect: connection refused" start-of-body= Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.096127 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.100994 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" event={"ID":"7ae622b9-9a50-4f4c-8bf0-e845579c5f44","Type":"ContainerStarted","Data":"142fe8436311e34a0fe52f093173a5e355471a52b33639636f4b8ce9619a107f"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.106887 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9dvjv" event={"ID":"4c3c7d59-0131-4a77-9828-7a78ff18a8ab","Type":"ContainerStarted","Data":"adcff40c86415660737dc147c391673b4968dec2adcb22436efb6ed2a9ac6d93"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.107450 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9dvjv" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.126974 4924 patch_prober.go:28] interesting pod/downloads-7954f5f757-9dvjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.127369 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9dvjv" podUID="4c3c7d59-0131-4a77-9828-7a78ff18a8ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: 
connection refused" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.129988 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mcpjh" podStartSLOduration=122.129965391 podStartE2EDuration="2m2.129965391s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.129539309 +0000 UTC m=+141.639020286" watchObservedRunningTime="2025-12-11 13:55:28.129965391 +0000 UTC m=+141.639446368" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.143652 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tl6gg" event={"ID":"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1","Type":"ContainerStarted","Data":"0b91b2f37c64b804c0ba87cc2b9a76329d2336da39fb34aa5976272b2484a018"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.143883 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tl6gg" event={"ID":"2cb2bbd4-a42a-43c5-a912-f3a02418a4f1","Type":"ContainerStarted","Data":"5362ccfa49a1e77164e7221b33637241601ff46970b1095fe05771ec3e0d4a6d"} Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.179597 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.179906 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:55:28.67988529 +0000 UTC m=+142.189366277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.179952 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.181648 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.681637169 +0000 UTC m=+142.191118146 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.185796 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-6tc2t" podStartSLOduration=122.185771274 podStartE2EDuration="2m2.185771274s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.173393208 +0000 UTC m=+141.682874215" watchObservedRunningTime="2025-12-11 13:55:28.185771274 +0000 UTC m=+141.695252251" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.202890 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9dvjv" podStartSLOduration=122.202868833 podStartE2EDuration="2m2.202868833s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.202617446 +0000 UTC m=+141.712098423" watchObservedRunningTime="2025-12-11 13:55:28.202868833 +0000 UTC m=+141.712349810" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.236541 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.238853 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.241790 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.241853 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.254452 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-j44sk" podStartSLOduration=122.254435748 podStartE2EDuration="2m2.254435748s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.251836945 +0000 UTC m=+141.761317922" watchObservedRunningTime="2025-12-11 13:55:28.254435748 +0000 UTC m=+141.763916725" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.254860 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.265689 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:28 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:28 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:28 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.265753 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.270235 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.283496 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.283891 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.284751 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.784724077 +0000 UTC m=+142.294205054 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.296249 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tl6gg" podStartSLOduration=5.296232919 podStartE2EDuration="5.296232919s" podCreationTimestamp="2025-12-11 13:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.294519581 +0000 UTC m=+141.804000558" watchObservedRunningTime="2025-12-11 13:55:28.296232919 +0000 UTC m=+141.805713896" Dec 11 13:55:28 crc kubenswrapper[4924]: W1211 13:55:28.341671 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487ceb97_cea5_4120_900d_dcd405f2f574.slice/crio-8c9b01c9f0c50f35c7e4834d59f1940af73577bc8a4983a8bed003f5cf444c17 WatchSource:0}: Error finding container 8c9b01c9f0c50f35c7e4834d59f1940af73577bc8a4983a8bed003f5cf444c17: Status 404 returned error can't find the container with id 8c9b01c9f0c50f35c7e4834d59f1940af73577bc8a4983a8bed003f5cf444c17 Dec 11 13:55:28 crc kubenswrapper[4924]: W1211 13:55:28.347702 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12434db_4aac_48f2_9911_5c04ad6b461d.slice/crio-b36a7aee0fd99f71277c95700db7f77776fe47ed61a3c32f7ac8131a5856cb97 WatchSource:0}: Error finding container b36a7aee0fd99f71277c95700db7f77776fe47ed61a3c32f7ac8131a5856cb97: 
Status 404 returned error can't find the container with id b36a7aee0fd99f71277c95700db7f77776fe47ed61a3c32f7ac8131a5856cb97 Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.381640 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" podStartSLOduration=122.381623221 podStartE2EDuration="2m2.381623221s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.381276742 +0000 UTC m=+141.890757719" watchObservedRunningTime="2025-12-11 13:55:28.381623221 +0000 UTC m=+141.891104188" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.385424 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.385754 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.885742147 +0000 UTC m=+142.395223124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.426056 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4gbtq" podStartSLOduration=123.426034206 podStartE2EDuration="2m3.426034206s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:28.417361843 +0000 UTC m=+141.926842820" watchObservedRunningTime="2025-12-11 13:55:28.426034206 +0000 UTC m=+141.935515183" Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.487608 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.490652 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:28.990594004 +0000 UTC m=+142.500074991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.611295 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.612951 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.112934952 +0000 UTC m=+142.622415929 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.621218 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.625777 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tvrr4"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.646845 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.651485 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-j96gb"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.663866 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.686492 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-g462z"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.695704 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-45h24"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.697892 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p7n6k"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.703822 4924 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nvxls"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.717147 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.717522 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.217479061 +0000 UTC m=+142.726960038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.717752 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.718200 4924 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.21817502 +0000 UTC m=+142.727655997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: W1211 13:55:28.719619 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a5809bc_0192_4557_b2db_0f011c4a2cd0.slice/crio-d564df102b3bb4094deaab60a24ab8a2b8795387843f3215b495e8d571b86755 WatchSource:0}: Error finding container d564df102b3bb4094deaab60a24ab8a2b8795387843f3215b495e8d571b86755: Status 404 returned error can't find the container with id d564df102b3bb4094deaab60a24ab8a2b8795387843f3215b495e8d571b86755 Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.759513 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t"] Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.823881 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.824182 4924 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.324154009 +0000 UTC m=+142.833634986 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.825377 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.826909 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.326887506 +0000 UTC m=+142.836368473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.941405 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.941623 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.44159915 +0000 UTC m=+142.951080127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:28 crc kubenswrapper[4924]: I1211 13:55:28.942095 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:28 crc kubenswrapper[4924]: E1211 13:55:28.942454 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.442442994 +0000 UTC m=+142.951923971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.043385 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.046230 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.54617658 +0000 UTC m=+143.055657557 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.147160 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.149394 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.649369451 +0000 UTC m=+143.158850428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.236610 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:29 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:29 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:29 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.236665 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.244255 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" event={"ID":"b96c821c-a977-4a01-91d1-9c7df59ae49b","Type":"ContainerStarted","Data":"b6ee0ec4ebac346a9154663f8d0f53c91ea9da9fdd7d51b011f37f8d90a8fd62"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.253390 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.253722 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.753708663 +0000 UTC m=+143.263189630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.255222 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" event={"ID":"487ceb97-cea5-4120-900d-dcd405f2f574","Type":"ContainerStarted","Data":"402f7bb9d06929ceeb50cf0f223e535e5fbe377798f97b78139c4370546c4db3"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.255258 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" event={"ID":"487ceb97-cea5-4120-900d-dcd405f2f574","Type":"ContainerStarted","Data":"8c9b01c9f0c50f35c7e4834d59f1940af73577bc8a4983a8bed003f5cf444c17"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.280449 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" event={"ID":"fd740900-c96d-4342-bedb-c8f5ef2acc49","Type":"ContainerStarted","Data":"07901c2503da6f8562a4da8c538728f515cc000afb5c7dd6f2d946b0119e1666"} Dec 11 13:55:29 crc 
kubenswrapper[4924]: I1211 13:55:29.327390 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" event={"ID":"f12434db-4aac-48f2-9911-5c04ad6b461d","Type":"ContainerStarted","Data":"b36a7aee0fd99f71277c95700db7f77776fe47ed61a3c32f7ac8131a5856cb97"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.330952 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerStarted","Data":"ef964f25e06922d014e60cf4927b8cfeb8a7283f341c368d438a985acd5b3d2a"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.332353 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" event={"ID":"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc","Type":"ContainerStarted","Data":"123342d730e18b54344d25f6fe72ff5a54cfd05bd96703cda5ba782485ddc773"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.332374 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" event={"ID":"e1cfbc9d-cb80-4d25-b6fa-2a2546154ccc","Type":"ContainerStarted","Data":"f5f3f031bf7efccf7911dcaca3e563550a13a0b5c02e00bc4fb4cb262e8f08c6"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.339709 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j96gb" event={"ID":"13f0d8ea-06bd-477e-86ec-bc46704d8784","Type":"ContainerStarted","Data":"b108b4e61de06d9701bf07302c870b697d23ee55aad352f6cb5d9d070594ac5a"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.342091 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" 
event={"ID":"b94a998f-0317-4fb2-9633-c68c86337a93","Type":"ContainerStarted","Data":"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.342733 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.346265 4924 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6cqwz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.346298 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.354707 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqvgm" podStartSLOduration=123.354692242 podStartE2EDuration="2m3.354692242s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.352616814 +0000 UTC m=+142.862097791" watchObservedRunningTime="2025-12-11 13:55:29.354692242 +0000 UTC m=+142.864173219" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.355720 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" 
event={"ID":"563b2379-6f6c-4604-90e7-786d71191a32","Type":"ContainerStarted","Data":"1ec51b08bbed478897edb134660e3097b8fafce948a1e31d5a5374114534284a"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.355886 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.356178 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.856166653 +0000 UTC m=+143.365647630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.361118 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" event={"ID":"768953b9-4fd1-45ac-bb0c-11dde7a825ff","Type":"ContainerStarted","Data":"dcbf79a9a6c04585ca327c885f98fdcb757d35de387bf5ff750a7981bd05c7d6"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.362233 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" 
event={"ID":"2fe39169-62e5-4364-bf39-1bd6cccbf231","Type":"ContainerStarted","Data":"9f2b37d29593b6e3f6bbac16f246dd5be7c59758c345a89ff7981941a56ec54e"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.390079 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" event={"ID":"f21d0ab3-26d3-4991-bc1f-28b90a55098f","Type":"ContainerStarted","Data":"6ae5c40b00517264a9fec12801152091aea30ea0c01c31006fdca39e3aa2f0b1"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.390128 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" event={"ID":"f21d0ab3-26d3-4991-bc1f-28b90a55098f","Type":"ContainerStarted","Data":"88fc0e0ff7d597d137747d5ce76a95df53a66b64382b986889ec59f2998e5c77"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.390912 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.392676 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" podStartSLOduration=123.392666226 podStartE2EDuration="2m3.392666226s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.39138868 +0000 UTC m=+142.900869647" watchObservedRunningTime="2025-12-11 13:55:29.392666226 +0000 UTC m=+142.902147203" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.394543 4924 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mg9gr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Dec 
11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.394578 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" podUID="f21d0ab3-26d3-4991-bc1f-28b90a55098f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.411949 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" event={"ID":"74f2dd5a-bc29-4b79-bccb-28c4b32f1947","Type":"ContainerStarted","Data":"6cd8569c9f82a09c739ca4190b49670e75d0beba0c71e63c5838dc74e12ad9a4"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.411990 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" event={"ID":"74f2dd5a-bc29-4b79-bccb-28c4b32f1947","Type":"ContainerStarted","Data":"1a1660c6672d7ce7a4646e4c2c81605d492eae22ae4553f6001a469ec0e1c516"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.412002 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" event={"ID":"74f2dd5a-bc29-4b79-bccb-28c4b32f1947","Type":"ContainerStarted","Data":"c66612796c8c30225035eaec323a1cb5b078ba028a5eabbb5b2ad0d5ec798c48"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.425253 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" event={"ID":"82d94e21-1c16-4233-a399-e34c89240e6d","Type":"ContainerStarted","Data":"f6585b5ff9d7dc5d65ea95dd64ad8fcf3ceaa4643da72497453a72a239205014"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.428642 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" podStartSLOduration=123.428621373 
podStartE2EDuration="2m3.428621373s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.423693485 +0000 UTC m=+142.933174472" watchObservedRunningTime="2025-12-11 13:55:29.428621373 +0000 UTC m=+142.938102350" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.428776 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" event={"ID":"4cb1b5d8-52a1-4274-9ccc-45ac4533481d","Type":"ContainerStarted","Data":"ae5ca1db994ff404e2a4519c340543feca4381ec2abdab7c9da9064a22c0cea7"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.429498 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.443522 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.449037 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" event={"ID":"c35e5a3f-f63f-4dc2-b439-2da4d6e8cd34","Type":"ContainerStarted","Data":"d1fb3e4aaa92de3e6136fcdb905daadcb3553e7fda95a8f3f70e422ea0a1ef3b"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.449835 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.461652 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.463064 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:29.963046408 +0000 UTC m=+143.472527385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.474565 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" event={"ID":"82848f0a-6b6c-442a-99b7-6db2a8076fee","Type":"ContainerStarted","Data":"d7ead512f24eca70886aaa87610c32e52cbfe3558e7c6003b05f665e3e222199"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.475451 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.476387 4924 patch_prober.go:28] interesting pod/console-operator-58897d9998-qqvrl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.476420 4924 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-qqvrl" podUID="82848f0a-6b6c-442a-99b7-6db2a8076fee" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.490815 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-n9z8x" podStartSLOduration=123.490798115 podStartE2EDuration="2m3.490798115s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.483720757 +0000 UTC m=+142.993201734" watchObservedRunningTime="2025-12-11 13:55:29.490798115 +0000 UTC m=+143.000279092" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.536590 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" event={"ID":"0a5809bc-0192-4557-b2db-0f011c4a2cd0","Type":"ContainerStarted","Data":"d564df102b3bb4094deaab60a24ab8a2b8795387843f3215b495e8d571b86755"} Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.546308 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" podStartSLOduration=124.54629066 podStartE2EDuration="2m4.54629066s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.544406977 +0000 UTC m=+143.053887954" watchObservedRunningTime="2025-12-11 13:55:29.54629066 +0000 UTC m=+143.055771637" Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.570261 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.572115 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.072097873 +0000 UTC m=+143.581578950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.577105 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" event={"ID":"16d4f535-7844-45b1-8a32-1b786e5f1b89","Type":"ContainerStarted","Data":"9ef69abfcda0d559415ed9ed9afb22d9c11813a9a9a137b9f3873807421b7686"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.614950 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9" event={"ID":"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057","Type":"ContainerStarted","Data":"bd136eb9dcd89adb686612acbf1fc3f84c348c9ae3e9b113d54e04ef415e1a9d"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.615185 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9" event={"ID":"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057","Type":"ContainerStarted","Data":"24fb78d3fb0d51eb403f812f69f78913d66093a56f8c87f22c6ad5075d8be914"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.615250 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9" event={"ID":"c07d7a97-69d8-4feb-8cd6-6cbbd40cb057","Type":"ContainerStarted","Data":"77b1e3eb59aed69bbe54c5ee665fc49f77a811bbbe294634a103d09c5b790a3c"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.616159 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.620247 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" podStartSLOduration=124.620228501 podStartE2EDuration="2m4.620228501s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.585479928 +0000 UTC m=+143.094960905" watchObservedRunningTime="2025-12-11 13:55:29.620228501 +0000 UTC m=+143.129709478"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.624475 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" event={"ID":"bfa27a09-ad13-4e29-8687-aab1bb1eb438","Type":"ContainerStarted","Data":"3be86da75243b83e634b275558385e22e7fcefedc3a98196254411e538ba7945"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.626451 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"]
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.632474 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xc4q4" podStartSLOduration=123.632459224 podStartE2EDuration="2m3.632459224s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.632127375 +0000 UTC m=+143.141608352" watchObservedRunningTime="2025-12-11 13:55:29.632459224 +0000 UTC m=+143.141940201"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.637769 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" event={"ID":"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1","Type":"ContainerStarted","Data":"37fe52a7c54275d6d5ffde3cb9f8c54926987c0d0907dc199c4d6587a240ba4d"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.639765 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" event={"ID":"c76ed578-a747-48e3-9653-0e07b782ba8e","Type":"ContainerStarted","Data":"b86cb4a8808f34ee790a661917facd75b9b46b89ae998b50dca1432504e1e4c4"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.639796 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" event={"ID":"c76ed578-a747-48e3-9653-0e07b782ba8e","Type":"ContainerStarted","Data":"1b5e9371797f113cd782117e4fce1c14847ee4cbbbe0af0793b5be46f11c0865"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.676891 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.677173 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.177157746 +0000 UTC m=+143.686638723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.678787 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.679602 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.179589635 +0000 UTC m=+143.689070602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.699033 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" event={"ID":"338bc6d6-6434-4f98-bc51-e5aaa047dd58","Type":"ContainerStarted","Data":"24b4b3f3e43ff9c0da3b33178d2c6fbe3add641f0e2047cf19e951287d2ffdb3"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.705224 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jq8sj" podStartSLOduration=123.705202582 podStartE2EDuration="2m3.705202582s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.687835876 +0000 UTC m=+143.197316853" watchObservedRunningTime="2025-12-11 13:55:29.705202582 +0000 UTC m=+143.214683559"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.709041 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" event={"ID":"3f3dc288-a9c5-40b3-9170-52c8095c515c","Type":"ContainerStarted","Data":"9308aaa76105285fe0609e341fdabb89f3d8f4b18d5ce726e502812de377033e"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.709092 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" event={"ID":"3f3dc288-a9c5-40b3-9170-52c8095c515c","Type":"ContainerStarted","Data":"7874599efe34422363e9b76e0146b093771862d55ec82a9a989696333ef872c5"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.710444 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.721937 4924 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7hdnq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body=
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.725698 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" podUID="3f3dc288-a9c5-40b3-9170-52c8095c515c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.761408 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" event={"ID":"cbd3bd95-adc7-417f-8128-e85165f8c2af","Type":"ContainerStarted","Data":"e225b4537908e1b9f2ab7f0aecefc9ac9d7f441e1c820da16b0b3c2204871285"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.761449 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" event={"ID":"cbd3bd95-adc7-417f-8128-e85165f8c2af","Type":"ContainerStarted","Data":"39a70e0003adba30e0ddd9fd370b766964feb73e0efd97f4e9aed3c61745c1cd"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.762417 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" podStartSLOduration=123.762398815 podStartE2EDuration="2m3.762398815s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.761578002 +0000 UTC m=+143.271058979" watchObservedRunningTime="2025-12-11 13:55:29.762398815 +0000 UTC m=+143.271879792"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.779902 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.780419 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.280388159 +0000 UTC m=+143.789869136 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.792705 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" event={"ID":"d4b3a192-0bab-4362-8737-f4725a4ff976","Type":"ContainerStarted","Data":"d658c1077f405038374c8bcd0fea20a5023018d9c3989c2264e011ddd5f05595"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.792749 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" event={"ID":"d4b3a192-0bab-4362-8737-f4725a4ff976","Type":"ContainerStarted","Data":"4f2ecb536952aeb74c6edd566573b2da6ff00e27856c5d858ebaf5b2e30b9034"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.793357 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.807650 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-45h24" event={"ID":"20a02c6c-b2eb-46c5-8568-29b7e394ad6e","Type":"ContainerStarted","Data":"c6373d1a5351fdaaf1c5e0ca4c139979e7086ddd7a6e85067f85c946d525d7bb"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.847581 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" event={"ID":"a36823c9-41a3-4eb3-9e85-634f33d00f85","Type":"ContainerStarted","Data":"3d8a38d4899e44c68b7518e4a5fa02ddf5e1a50722f8df7796caad05aa5e2bb2"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.848602 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" event={"ID":"a36823c9-41a3-4eb3-9e85-634f33d00f85","Type":"ContainerStarted","Data":"0f71aba128312dd96fef51a395d0adeb19a3e934aeb38c19cb4535a3433a33eb"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.869538 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g462z" event={"ID":"8dee4c98-7d07-4923-a8e5-47d322cc35d6","Type":"ContainerStarted","Data":"68313f275a203e1c75abded591919290c9d37ea4068f6405457a558b7802eb9d"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.881418 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" event={"ID":"7e529814-a09b-4dff-b79d-5525a16ce269","Type":"ContainerStarted","Data":"f74516e9b138c39c073d996ecef7b7ef11847815314e0a97124078748dcdb4d4"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.885462 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vcfc9" podStartSLOduration=123.885449222 podStartE2EDuration="2m3.885449222s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.882419357 +0000 UTC m=+143.391900334" watchObservedRunningTime="2025-12-11 13:55:29.885449222 +0000 UTC m=+143.394930199"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.887828 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.888224 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.388211629 +0000 UTC m=+143.897692606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.924727 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" podStartSLOduration=123.924702302 podStartE2EDuration="2m3.924702302s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.920021761 +0000 UTC m=+143.429502748" watchObservedRunningTime="2025-12-11 13:55:29.924702302 +0000 UTC m=+143.434183279"
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.987938 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" event={"ID":"f85cbfdb-67a4-47f3-9087-b76b960fbc62","Type":"ContainerStarted","Data":"d209b8bd4279abdfeedcbfcc1d1ef678664d6e5682c0f47b5f93d4d590311863"}
Dec 11 13:55:29 crc kubenswrapper[4924]: I1211 13:55:29.989235 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:29 crc kubenswrapper[4924]: E1211 13:55:29.990347 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.49031335 +0000 UTC m=+143.999794327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.014574 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" podStartSLOduration=124.014560289 podStartE2EDuration="2m4.014560289s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.012498881 +0000 UTC m=+143.521979858" watchObservedRunningTime="2025-12-11 13:55:30.014560289 +0000 UTC m=+143.524041266"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.014746 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-g7gp9" podStartSLOduration=124.014741314 podStartE2EDuration="2m4.014741314s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:29.973397096 +0000 UTC m=+143.482878063" watchObservedRunningTime="2025-12-11 13:55:30.014741314 +0000 UTC m=+143.524222291"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.026508 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" event={"ID":"71f847e8-3ec3-43fb-a33e-6c71a867e160","Type":"ContainerStarted","Data":"4070b20edb3ee2b1266b931c947d5f87f7615f839035462a2fc674d168e732d7"}
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.046104 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" event={"ID":"488b109d-f524-4b78-a1a9-d07a1178236d","Type":"ContainerStarted","Data":"ee6dcd39520d7ac67c7e07cde38e0a910347c0731a877e771899f351281f0b43"}
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.051451 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-g462z" podStartSLOduration=124.051431012 podStartE2EDuration="2m4.051431012s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.044744395 +0000 UTC m=+143.554225372" watchObservedRunningTime="2025-12-11 13:55:30.051431012 +0000 UTC m=+143.560911989"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.090961 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.092036 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.592020329 +0000 UTC m=+144.101501306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.136505 4924 patch_prober.go:28] interesting pod/downloads-7954f5f757-9dvjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.136542 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9dvjv" podUID="4c3c7d59-0131-4a77-9828-7a78ff18a8ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.136579 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" event={"ID":"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502","Type":"ContainerStarted","Data":"fe4c9d5ed79f6551b79b01672121653eb506cfa4f36ca1eb29aff0b21f97e286"}
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.136599 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" event={"ID":"f1324f2e-d3ea-444d-9e6c-ff3ea41d6502","Type":"ContainerStarted","Data":"4503364fcffc6d2a4fc0016d2f4ef0bfdf3795864a06a7a428482f3cd9786f1f"}
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.161569 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" podStartSLOduration=124.161553387 podStartE2EDuration="2m4.161553387s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.098918673 +0000 UTC m=+143.608399650" watchObservedRunningTime="2025-12-11 13:55:30.161553387 +0000 UTC m=+143.671034354"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.161796 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-25w25" podStartSLOduration=124.161791694 podStartE2EDuration="2m4.161791694s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.15987546 +0000 UTC m=+143.669356447" watchObservedRunningTime="2025-12-11 13:55:30.161791694 +0000 UTC m=+143.671272671"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.211040 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.212736 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.712721271 +0000 UTC m=+144.222202248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.227470 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.229634 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.246174 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 13:55:30 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld
Dec 11 13:55:30 crc kubenswrapper[4924]: [+]process-running ok
Dec 11 13:55:30 crc kubenswrapper[4924]: healthz check failed
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.246231 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.252165 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-twjdt" podStartSLOduration=125.252148315 podStartE2EDuration="2m5.252148315s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.251799136 +0000 UTC m=+143.761280113" watchObservedRunningTime="2025-12-11 13:55:30.252148315 +0000 UTC m=+143.761629292"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.253495 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" podStartSLOduration=124.253488213 podStartE2EDuration="2m4.253488213s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.212579937 +0000 UTC m=+143.722060914" watchObservedRunningTime="2025-12-11 13:55:30.253488213 +0000 UTC m=+143.762969190"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.303428 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" podStartSLOduration=124.303405391 podStartE2EDuration="2m4.303405391s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.295040417 +0000 UTC m=+143.804521384" watchObservedRunningTime="2025-12-11 13:55:30.303405391 +0000 UTC m=+143.812886378"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.314623 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-p7rww" podStartSLOduration=124.314606985 podStartE2EDuration="2m4.314606985s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.31298227 +0000 UTC m=+143.822463237" watchObservedRunningTime="2025-12-11 13:55:30.314606985 +0000 UTC m=+143.824087962"
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.315131 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.81511991 +0000 UTC m=+144.324600887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.314906 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.364205 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dd9cn" podStartSLOduration=124.364188304 podStartE2EDuration="2m4.364188304s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.357011023 +0000 UTC m=+143.866492000" watchObservedRunningTime="2025-12-11 13:55:30.364188304 +0000 UTC m=+143.873669281"
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.422508 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.422857 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:30.922840778 +0000 UTC m=+144.432321755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.526922 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.527394 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.027379106 +0000 UTC m=+144.536860083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.628958 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.629519 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.129496617 +0000 UTC m=+144.638977594 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.629651 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.630002 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.129991931 +0000 UTC m=+144.639472908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.730959 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.731111 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.231089843 +0000 UTC m=+144.740570830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.731217 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.731569 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.231558737 +0000 UTC m=+144.741039714 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.832092 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.832405 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.332384071 +0000 UTC m=+144.841865048 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.832529 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.832856 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.332849164 +0000 UTC m=+144.842330141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.874727 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.875085 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:30 crc kubenswrapper[4924]: I1211 13:55:30.933612 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:30 crc kubenswrapper[4924]: E1211 13:55:30.933913 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.433898675 +0000 UTC m=+144.943379642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.035294 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.035630 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.535619035 +0000 UTC m=+145.045100012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.139924 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.140598 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.640581966 +0000 UTC m=+145.150062943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.151444 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" event={"ID":"2fe39169-62e5-4364-bf39-1bd6cccbf231","Type":"ContainerStarted","Data":"749cf0f93eb3bd3f18f6675a0244da3dc2d3d6f6fc856dbdc0e1ea0f9f7f146b"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.151489 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" event={"ID":"2fe39169-62e5-4364-bf39-1bd6cccbf231","Type":"ContainerStarted","Data":"28952d46f55e5fe9244ce274253860437b28ab106f963959f8995dedfcfc94ff"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.153497 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-45h24" event={"ID":"20a02c6c-b2eb-46c5-8568-29b7e394ad6e","Type":"ContainerStarted","Data":"a57e79ee0e5e85041f54bcabefc74b8be90816a95da943b599b93d7fbb5ffb3d"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.154895 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" event={"ID":"82d94e21-1c16-4233-a399-e34c89240e6d","Type":"ContainerStarted","Data":"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.155540 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.156644 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" event={"ID":"fd740900-c96d-4342-bedb-c8f5ef2acc49","Type":"ContainerStarted","Data":"371e1ff8c0d106cdef164ed715d77af0b1d7f55711517b803d3bd0445c69542f"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.157472 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-t2zg5" podStartSLOduration=126.157458139 podStartE2EDuration="2m6.157458139s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:30.398376452 +0000 UTC m=+143.907857449" watchObservedRunningTime="2025-12-11 13:55:31.157458139 +0000 UTC m=+144.666939116" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.157814 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hmslv"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.157951 4924 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-p7n6k container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.42:6443/healthz\": dial tcp 10.217.0.42:6443: connect: connection refused" start-of-body= Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.158002 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.42:6443/healthz\": dial tcp 10.217.0.42:6443: connect: connection refused" Dec 11 13:55:31 crc kubenswrapper[4924]: 
I1211 13:55:31.158816 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" event={"ID":"f12434db-4aac-48f2-9911-5c04ad6b461d","Type":"ContainerStarted","Data":"c27c30133ead438b987d5ddb12c4cc9f7e4fb4e1c8d886113581932b56afcd38"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.158903 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.163781 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" event={"ID":"768953b9-4fd1-45ac-bb0c-11dde7a825ff","Type":"ContainerStarted","Data":"80db733d5792f9eefdbdaf476a3786b3fc7d39ee88e09ec5cf58ac109aedbb50"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.163806 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" event={"ID":"768953b9-4fd1-45ac-bb0c-11dde7a825ff","Type":"ContainerStarted","Data":"671e519f03584f294d7cf90b700f11e685d3fb747274dd5df9f0ff5d11ce403a"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.166454 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" event={"ID":"487ceb97-cea5-4120-900d-dcd405f2f574","Type":"ContainerStarted","Data":"df486983cc6b40d3194ea4a6fed0fe60365a57c36125d0294d6d4ee090b09a7a"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.167655 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-g462z" event={"ID":"8dee4c98-7d07-4923-a8e5-47d322cc35d6","Type":"ContainerStarted","Data":"524ecfe7600f30d4987a9b72b611cd3ea0449801fcd2516f76e66361b326dff6"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.168394 4924 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.174865 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" event={"ID":"d4b3a192-0bab-4362-8737-f4725a4ff976","Type":"ContainerStarted","Data":"f140b8e76e480be7cadd1ca05c673ec37d94198634a11452470559407327b94f"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.179725 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" event={"ID":"a36823c9-41a3-4eb3-9e85-634f33d00f85","Type":"ContainerStarted","Data":"d84462a14c4437117c1e4449e0719ed497157daa8aa8425ecc4da3884720c1df"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.180860 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerStarted","Data":"e74bd6508a2a26a4928d2df3576043ee518d19da2b39c07e7d3fb632b1eaa428"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.181067 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.184653 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmslv"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.184773 4924 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dvnc9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.184804 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" 
podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.185969 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j96gb" event={"ID":"13f0d8ea-06bd-477e-86ec-bc46704d8784","Type":"ContainerStarted","Data":"b1bb9b58b4a27788a257e00607e5a968805dbdc93b7e7c80d3a638c8ef54c599"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.185999 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-j96gb" event={"ID":"13f0d8ea-06bd-477e-86ec-bc46704d8784","Type":"ContainerStarted","Data":"008b505e5ec4bff50dec00fac992d95dea9abe096c39d9a54b4104d514006793"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.186108 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.193616 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" event={"ID":"f85cbfdb-67a4-47f3-9087-b76b960fbc62","Type":"ContainerStarted","Data":"c2e897f08a11f03649ee247921a44ddf86e0d41a80e50968ba69cc11176e747f"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.198190 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rxbwv" podStartSLOduration=125.198088177 podStartE2EDuration="2m5.198088177s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.190098193 +0000 UTC m=+144.699579170" watchObservedRunningTime="2025-12-11 13:55:31.198088177 +0000 UTC m=+144.707569164" Dec 11 13:55:31 crc kubenswrapper[4924]: 
I1211 13:55:31.209644 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mj5lw" event={"ID":"b7c8f8d0-10da-4d63-847f-bb9f584c2ff1","Type":"ContainerStarted","Data":"66786c0807b811c80ff963ed4b6a2165865d2e40bd08ad2e58e82e977253e8c2"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.212089 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" event={"ID":"0a5809bc-0192-4557-b2db-0f011c4a2cd0","Type":"ContainerStarted","Data":"735f74b642c81fe6c7c51832a620348c27b90027e2c5170fe8552bd3eaadd881"} Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.243592 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podStartSLOduration=125.243577721 podStartE2EDuration="2m5.243577721s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.241932445 +0000 UTC m=+144.751413422" watchObservedRunningTime="2025-12-11 13:55:31.243577721 +0000 UTC m=+144.753058698" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.247656 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.248076 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: 
\"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.248125 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjg2p\" (UniqueName: \"kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.248241 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.252165 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.752151312 +0000 UTC m=+145.261632279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.269558 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:31 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:31 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:31 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.269628 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.273702 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.273958 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mg9gr" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.297057 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nvxls" podStartSLOduration=125.297042439 podStartE2EDuration="2m5.297042439s" 
podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.293725766 +0000 UTC m=+144.803206753" watchObservedRunningTime="2025-12-11 13:55:31.297042439 +0000 UTC m=+144.806523416" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.297539 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.298484 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.303752 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-qqvrl" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.305414 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.354050 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.354754 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjg2p\" (UniqueName: \"kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.355049 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.355487 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.356090 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.856069023 +0000 UTC m=+145.365550000 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.359461 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.365782 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.382027 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.400177 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-45h24" podStartSLOduration=8.400145668 podStartE2EDuration="8.400145668s" podCreationTimestamp="2025-12-11 13:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.345088975 +0000 UTC m=+144.854569952" watchObservedRunningTime="2025-12-11 13:55:31.400145668 +0000 UTC m=+144.909626645" Dec 11 13:55:31 crc 
kubenswrapper[4924]: I1211 13:55:31.448292 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjg2p\" (UniqueName: \"kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p\") pod \"certified-operators-hmslv\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") " pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.465969 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.466054 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.466076 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.466118 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2z6m\" (UniqueName: \"kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m\") pod \"community-operators-tkv96\" (UID: 
\"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.466399 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:31.966387774 +0000 UTC m=+145.475868751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.473817 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.484780 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.485901 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.524817 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.537606 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" podStartSLOduration=125.537592719 podStartE2EDuration="2m5.537592719s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.536773906 +0000 UTC m=+145.046254883" watchObservedRunningTime="2025-12-11 13:55:31.537592719 +0000 UTC m=+145.047073696" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585599 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585698 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvbc\" (UniqueName: \"kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585732 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities\") pod \"certified-operators-86ngv\" (UID: 
\"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585804 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585821 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585838 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.585873 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2z6m\" (UniqueName: \"kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.586188 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:55:32.08617301 +0000 UTC m=+145.595653987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.586558 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.586771 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.635571 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pb2zr" podStartSLOduration=125.635553313 podStartE2EDuration="2m5.635553313s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.573568196 +0000 UTC m=+145.083049163" watchObservedRunningTime="2025-12-11 13:55:31.635553313 +0000 UTC m=+145.145034290" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.636961 4924 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x8bzp" podStartSLOduration=125.636954752 podStartE2EDuration="2m5.636954752s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.634675368 +0000 UTC m=+145.144156345" watchObservedRunningTime="2025-12-11 13:55:31.636954752 +0000 UTC m=+145.146435729" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.651229 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2z6m\" (UniqueName: \"kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m\") pod \"community-operators-tkv96\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") " pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.659441 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-j96gb" podStartSLOduration=8.659420922 podStartE2EDuration="8.659420922s" podCreationTimestamp="2025-12-11 13:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.658689191 +0000 UTC m=+145.168170168" watchObservedRunningTime="2025-12-11 13:55:31.659420922 +0000 UTC m=+145.168901909" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.690349 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.690433 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfvbc\" (UniqueName: \"kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.690467 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.690495 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.690884 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.190870173 +0000 UTC m=+145.700351170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.691473 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.691864 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.691987 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.692688 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.695716 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hdnq" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.697728 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-wzsqk" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.726511 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rkln8" podStartSLOduration=125.72648135 podStartE2EDuration="2m5.72648135s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.701537812 +0000 UTC m=+145.211018789" watchObservedRunningTime="2025-12-11 13:55:31.72648135 +0000 UTC m=+145.235962327" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.728585 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.738686 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7t6t7" podStartSLOduration=125.738660712 podStartE2EDuration="2m5.738660712s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:31.736682876 +0000 UTC m=+145.246163853" watchObservedRunningTime="2025-12-11 13:55:31.738660712 +0000 UTC m=+145.248141689" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.775510 4924 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dfvbc\" (UniqueName: \"kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc\") pod \"certified-operators-86ngv\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.820202 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.830202 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.330167655 +0000 UTC m=+145.839648632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.865139 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.892443 4924 patch_prober.go:28] interesting pod/apiserver-76f77b778f-nf4pv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]log ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]etcd ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/max-in-flight-filter ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 13:55:31 crc kubenswrapper[4924]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 13:55:31 crc kubenswrapper[4924]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/openshift.io-startinformers ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 13:55:31 crc kubenswrapper[4924]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 13:55:31 crc kubenswrapper[4924]: livez check failed Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.892511 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" podUID="7e529814-a09b-4dff-b79d-5525a16ce269" 
containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.938746 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.967589 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.967650 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.967692 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:31 crc kubenswrapper[4924]: I1211 13:55:31.967758 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxzpg\" (UniqueName: \"kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" 
Dec 11 13:55:31 crc kubenswrapper[4924]: E1211 13:55:31.978519 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.478499281 +0000 UTC m=+145.987980258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.058297 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.071010 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.071202 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.571181218 +0000 UTC m=+146.080662195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.071250 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxzpg\" (UniqueName: \"kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.071369 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.071397 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.071429 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content\") pod \"community-operators-bgbpp\" (UID: 
\"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.072305 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.072444 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.572427683 +0000 UTC m=+146.081908660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.072603 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.089760 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bp59t" podStartSLOduration=126.089744418 podStartE2EDuration="2m6.089744418s" 
podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:32.089630544 +0000 UTC m=+145.599111521" watchObservedRunningTime="2025-12-11 13:55:32.089744418 +0000 UTC m=+145.599225395" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.116226 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxzpg\" (UniqueName: \"kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg\") pod \"community-operators-bgbpp\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.173847 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.174275 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.674256565 +0000 UTC m=+146.183737542 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.240363 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:32 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:32 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:32 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.240408 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.254503 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" event={"ID":"0a5809bc-0192-4557-b2db-0f011c4a2cd0","Type":"ContainerStarted","Data":"5ff3a8329ae3a000359bd7a5d935fff48ec543667077da7d2e51e347e13ef82b"} Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.256387 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" containerName="controller-manager" containerID="cri-o://8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759" gracePeriod=30 Dec 11 
13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.262732 4924 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dvnc9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.262771 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.290026 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.290304 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.790293076 +0000 UTC m=+146.299774053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.295255 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmslv"] Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.305679 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-77gwr" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.338826 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.392203 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.393725 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.893709244 +0000 UTC m=+146.403190221 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.494089 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.494654 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:32.994642831 +0000 UTC m=+146.504123808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.596284 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.596736 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.096721661 +0000 UTC m=+146.606202638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.645092 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:55:32 crc kubenswrapper[4924]: W1211 13:55:32.658848 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod592cde8b_91b2_49aa_a607_c84a02074d89.slice/crio-4feb793a56492a3a497ac1b733af1c90978018554a123d3c565bdc43dfbbed29 WatchSource:0}: Error finding container 4feb793a56492a3a497ac1b733af1c90978018554a123d3c565bdc43dfbbed29: Status 404 returned error can't find the container with id 4feb793a56492a3a497ac1b733af1c90978018554a123d3c565bdc43dfbbed29 Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.698563 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.699176 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.199165111 +0000 UTC m=+146.708646088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.730163 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 13:55:32 crc kubenswrapper[4924]: W1211 13:55:32.742358 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e98271d_3b5f_4c0d_963f_3d4ec1e0aad1.slice/crio-bd1864157f70e7bc1ec7894083cc944d5a1bcbf982d5edaab56cdbd4dfa05527 WatchSource:0}: Error finding container bd1864157f70e7bc1ec7894083cc944d5a1bcbf982d5edaab56cdbd4dfa05527: Status 404 returned error can't find the container with id bd1864157f70e7bc1ec7894083cc944d5a1bcbf982d5edaab56cdbd4dfa05527 Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.766839 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:55:32 crc kubenswrapper[4924]: W1211 13:55:32.774512 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5644036b_4e4a_4ec8_b8e4_87db71012482.slice/crio-6d7ac09507384ceef977ba9168c6449a7b36a21120b2350b2744ac2d8d41203a WatchSource:0}: Error finding container 6d7ac09507384ceef977ba9168c6449a7b36a21120b2350b2744ac2d8d41203a: Status 404 returned error can't find the container with id 6d7ac09507384ceef977ba9168c6449a7b36a21120b2350b2744ac2d8d41203a Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.799489 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.800190 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.300171701 +0000 UTC m=+146.809652678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.882196 4924 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 13:55:32 crc kubenswrapper[4924]: I1211 13:55:32.902133 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:32 crc kubenswrapper[4924]: E1211 13:55:32.902481 4924 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.402469036 +0000 UTC m=+146.911950013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.003452 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:33 crc kubenswrapper[4924]: E1211 13:55:33.003643 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.50361582 +0000 UTC m=+147.013096797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.004127 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:33 crc kubenswrapper[4924]: E1211 13:55:33.004476 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:55:33.504462074 +0000 UTC m=+147.013943051 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vwnfk" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.023988 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.051933 4924 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T13:55:32.882221809Z","Handler":null,"Name":""} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.066895 4924 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.066936 4924 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.086589 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.087480 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.096710 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.103523 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.104685 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.139773 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.206002 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.206076 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.206152 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txg9z\" (UniqueName: \"kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.206196 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.237527 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:33 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:33 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:33 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.237959 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.248348 4924 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.248386 4924 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.259557 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.265516 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" event={"ID":"0a5809bc-0192-4557-b2db-0f011c4a2cd0","Type":"ContainerStarted","Data":"f23e21f0a35233eafa08741cd6e7363618b5a24d117706634baba0e59467d1f6"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.271934 4924 generic.go:334] "Generic (PLEG): container finished" podID="b94a998f-0317-4fb2-9633-c68c86337a93" containerID="8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759" exitCode=0 Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.272013 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" event={"ID":"b94a998f-0317-4fb2-9633-c68c86337a93","Type":"ContainerDied","Data":"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.272040 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" event={"ID":"b94a998f-0317-4fb2-9633-c68c86337a93","Type":"ContainerDied","Data":"cd8f7b1d868921ea1569574f669d34f2a01d8e7c440ecb934d7a339a4b208e02"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.272055 4924 scope.go:117] "RemoveContainer" containerID="8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.272193 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6cqwz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.278684 4924 generic.go:334] "Generic (PLEG): container finished" podID="592cde8b-91b2-49aa-a607-c84a02074d89" containerID="03cddd9f47eea030acac372b51ffcd51cf9321e5f3bf0301071cc61bf0a14ed2" exitCode=0 Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.278923 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerDied","Data":"03cddd9f47eea030acac372b51ffcd51cf9321e5f3bf0301071cc61bf0a14ed2"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.278979 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerStarted","Data":"4feb793a56492a3a497ac1b733af1c90978018554a123d3c565bdc43dfbbed29"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.296158 4924 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.296654 4924 generic.go:334] "Generic (PLEG): container finished" podID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerID="4f08dbfae0e6c98ece06338dcbf839d5e0082fbd03b5563a125682811f3b6037" exitCode=0 Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.296707 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerDied","Data":"4f08dbfae0e6c98ece06338dcbf839d5e0082fbd03b5563a125682811f3b6037"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.296731 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" 
event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerStarted","Data":"9a7fbce4568c8c3146955dfd5f13b6d55c849d1638203297ad75be54726df882"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.296757 4924 scope.go:117] "RemoveContainer" containerID="8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.297492 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"] Dec 11 13:55:33 crc kubenswrapper[4924]: E1211 13:55:33.297708 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" containerName="controller-manager" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.297720 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" containerName="controller-manager" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.297838 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" containerName="controller-manager" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.298270 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: E1211 13:55:33.303622 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759\": container with ID starting with 8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759 not found: ID does not exist" containerID="8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.303654 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759"} err="failed to get container status \"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759\": rpc error: code = NotFound desc = could not find container \"8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759\": container with ID starting with 8652b80fe2968ea312c955c0f279dbc0a9dd5ecd7adbed176a6b265e7ac85759 not found: ID does not exist" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.311678 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca\") pod \"b94a998f-0317-4fb2-9633-c68c86337a93\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.311733 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpbzc\" (UniqueName: \"kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc\") pod \"b94a998f-0317-4fb2-9633-c68c86337a93\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.311803 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles\") pod \"b94a998f-0317-4fb2-9633-c68c86337a93\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.311831 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config\") pod \"b94a998f-0317-4fb2-9633-c68c86337a93\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.311911 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert\") pod \"b94a998f-0317-4fb2-9633-c68c86337a93\" (UID: \"b94a998f-0317-4fb2-9633-c68c86337a93\") " Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.312059 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.312115 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.312194 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txg9z\" (UniqueName: \"kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z\") pod \"redhat-marketplace-fvkwf\" (UID: 
\"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.317051 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b94a998f-0317-4fb2-9633-c68c86337a93" (UID: "b94a998f-0317-4fb2-9633-c68c86337a93"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.317068 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config" (OuterVolumeSpecName: "config") pod "b94a998f-0317-4fb2-9633-c68c86337a93" (UID: "b94a998f-0317-4fb2-9633-c68c86337a93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.317125 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.317346 4924 generic.go:334] "Generic (PLEG): container finished" podID="c76ed578-a747-48e3-9653-0e07b782ba8e" containerID="b86cb4a8808f34ee790a661917facd75b9b46b89ae998b50dca1432504e1e4c4" exitCode=0 Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.317356 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 
crc kubenswrapper[4924]: I1211 13:55:33.317376 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" event={"ID":"c76ed578-a747-48e3-9653-0e07b782ba8e","Type":"ContainerDied","Data":"b86cb4a8808f34ee790a661917facd75b9b46b89ae998b50dca1432504e1e4c4"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.318397 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca" (OuterVolumeSpecName: "client-ca") pod "b94a998f-0317-4fb2-9633-c68c86337a93" (UID: "b94a998f-0317-4fb2-9633-c68c86337a93"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.327248 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerStarted","Data":"6d7ac09507384ceef977ba9168c6449a7b36a21120b2350b2744ac2d8d41203a"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.335624 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.352270 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerStarted","Data":"bd1864157f70e7bc1ec7894083cc944d5a1bcbf982d5edaab56cdbd4dfa05527"} Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.352898 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b94a998f-0317-4fb2-9633-c68c86337a93" (UID: "b94a998f-0317-4fb2-9633-c68c86337a93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.354393 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txg9z\" (UniqueName: \"kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z\") pod \"redhat-marketplace-fvkwf\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") " pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.358734 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc" (OuterVolumeSpecName: "kube-api-access-jpbzc") pod "b94a998f-0317-4fb2-9633-c68c86337a93" (UID: "b94a998f-0317-4fb2-9633-c68c86337a93"). InnerVolumeSpecName "kube-api-access-jpbzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.363263 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.384693 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vwnfk\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") " pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413514 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 
13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413646 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413773 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413826 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413852 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk4dh\" (UniqueName: \"kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.413983 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b94a998f-0317-4fb2-9633-c68c86337a93-serving-cert\") on node \"crc\" DevicePath 
\"\"" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.414003 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.414015 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpbzc\" (UniqueName: \"kubernetes.io/projected/b94a998f-0317-4fb2-9633-c68c86337a93-kube-api-access-jpbzc\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.414024 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.414032 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94a998f-0317-4fb2-9633-c68c86337a93-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.451086 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.476357 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.481704 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.489726 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.515505 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.515792 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk4dh\" (UniqueName: \"kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.515916 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.516016 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.516103 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.517790 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.518026 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.518433 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.519600 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: 
I1211 13:55:33.537066 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk4dh\" (UniqueName: \"kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh\") pod \"controller-manager-879f6c89f-nzzhz\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.563824 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.606527 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.613265 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6cqwz"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.617088 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.617126 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.617179 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt9c\" (UniqueName: 
\"kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.629446 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.673733 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"] Dec 11 13:55:33 crc kubenswrapper[4924]: W1211 13:55:33.681269 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf678bb40_07bb_4ae9_a317_4d06821f518a.slice/crio-4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc WatchSource:0}: Error finding container 4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc: Status 404 returned error can't find the container with id 4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717849 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717905 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717929 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717945 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717975 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.717992 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.718015 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdt9c\" (UniqueName: \"kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c\") pod 
\"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.719140 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.719491 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.721672 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.723039 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.727093 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.744955 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.749154 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdt9c\" (UniqueName: \"kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c\") pod \"redhat-marketplace-kwx68\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.768018 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"] Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.823624 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.837742 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.877620 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.902119 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:55:33 crc kubenswrapper[4924]: I1211 13:55:33.965567 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"] Dec 11 13:55:34 crc kubenswrapper[4924]: W1211 13:55:34.037763 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfceb2d6_c7d2_4447_b56d_b71db58955eb.slice/crio-26d4f08c9a5232d200da5ff969ee0fa4144dc8454ed41f0da97d71ee18262b1f WatchSource:0}: Error finding container 26d4f08c9a5232d200da5ff969ee0fa4144dc8454ed41f0da97d71ee18262b1f: Status 404 returned error can't find the container with id 26d4f08c9a5232d200da5ff969ee0fa4144dc8454ed41f0da97d71ee18262b1f Dec 11 13:55:34 crc kubenswrapper[4924]: W1211 13:55:34.210175 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-dad5c2259e366cf87b8b2a447db9c7346fa8a7d129f8a6ace94466e3fd536b59 WatchSource:0}: Error finding container dad5c2259e366cf87b8b2a447db9c7346fa8a7d129f8a6ace94466e3fd536b59: Status 404 returned error can't find the container with id dad5c2259e366cf87b8b2a447db9c7346fa8a7d129f8a6ace94466e3fd536b59 Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.228764 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:34 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:34 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:34 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.228879 4924 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:34 crc kubenswrapper[4924]: W1211 13:55:34.252179 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-c323c9a139088d4fcc1efef4ea2efe0e910911ff45057859d4cba8d588c6244f WatchSource:0}: Error finding container c323c9a139088d4fcc1efef4ea2efe0e910911ff45057859d4cba8d588c6244f: Status 404 returned error can't find the container with id c323c9a139088d4fcc1efef4ea2efe0e910911ff45057859d4cba8d588c6244f Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.259230 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.365946 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvkwf" event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerStarted","Data":"4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.367966 4924 generic.go:334] "Generic (PLEG): container finished" podID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerID="603d1a1d803a2351a9974054e97af0efe401ec0340e5e84f35b44b52ed526b91" exitCode=0 Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.368039 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerDied","Data":"603d1a1d803a2351a9974054e97af0efe401ec0340e5e84f35b44b52ed526b91"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.370009 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" event={"ID":"bfceb2d6-c7d2-4447-b56d-b71db58955eb","Type":"ContainerStarted","Data":"26d4f08c9a5232d200da5ff969ee0fa4144dc8454ed41f0da97d71ee18262b1f"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.372288 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c323c9a139088d4fcc1efef4ea2efe0e910911ff45057859d4cba8d588c6244f"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.374771 4924 generic.go:334] "Generic (PLEG): container finished" podID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerID="a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1" exitCode=0 Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.374849 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerDied","Data":"a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.376167 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dad5c2259e366cf87b8b2a447db9c7346fa8a7d129f8a6ace94466e3fd536b59"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.377357 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerStarted","Data":"7021507234a8d73ca948eb697d783ee6fdda83cdb4710e6790698e93d8a03b1f"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.380222 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" 
event={"ID":"0a5809bc-0192-4557-b2db-0f011c4a2cd0","Type":"ContainerStarted","Data":"a49d3583e9dccfd115d7f7c77779b8240a75a5571eec38ce7244eb9a139f0a63"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.383473 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" event={"ID":"167e3306-54e1-470a-a7d6-55b2742ca45e","Type":"ContainerStarted","Data":"30c21cbe9726d1ac8840b7a60e4ddeefd65a38e67fcb564bc90e4725abd6cfe5"} Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.472642 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.474049 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.477301 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.479366 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.537959 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9pv\" (UniqueName: \"kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.538000 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " 
pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.538038 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.628319 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.638865 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9pv\" (UniqueName: \"kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.639209 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.639799 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.640650 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.640947 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.655406 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9pv\" (UniqueName: \"kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv\") pod \"redhat-operators-6xvm8\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") " pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.745066 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume\") pod \"c76ed578-a747-48e3-9653-0e07b782ba8e\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.745136 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume\") pod \"c76ed578-a747-48e3-9653-0e07b782ba8e\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.745217 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbjmw\" (UniqueName: \"kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw\") pod 
\"c76ed578-a747-48e3-9653-0e07b782ba8e\" (UID: \"c76ed578-a747-48e3-9653-0e07b782ba8e\") " Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.746348 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume" (OuterVolumeSpecName: "config-volume") pod "c76ed578-a747-48e3-9653-0e07b782ba8e" (UID: "c76ed578-a747-48e3-9653-0e07b782ba8e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.750059 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c76ed578-a747-48e3-9653-0e07b782ba8e" (UID: "c76ed578-a747-48e3-9653-0e07b782ba8e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.753158 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw" (OuterVolumeSpecName: "kube-api-access-tbjmw") pod "c76ed578-a747-48e3-9653-0e07b782ba8e" (UID: "c76ed578-a747-48e3-9653-0e07b782ba8e"). InnerVolumeSpecName "kube-api-access-tbjmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.789750 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.790910 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94a998f-0317-4fb2-9633-c68c86337a93" path="/var/lib/kubelet/pods/b94a998f-0317-4fb2-9633-c68c86337a93/volumes" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.831301 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.847280 4924 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c76ed578-a747-48e3-9653-0e07b782ba8e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.847556 4924 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c76ed578-a747-48e3-9653-0e07b782ba8e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.847570 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbjmw\" (UniqueName: \"kubernetes.io/projected/c76ed578-a747-48e3-9653-0e07b782ba8e-kube-api-access-tbjmw\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.861112 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 13:55:34 crc kubenswrapper[4924]: E1211 13:55:34.861371 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76ed578-a747-48e3-9653-0e07b782ba8e" containerName="collect-profiles" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.861386 4924 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c76ed578-a747-48e3-9653-0e07b782ba8e" containerName="collect-profiles" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.861786 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="c76ed578-a747-48e3-9653-0e07b782ba8e" containerName="collect-profiles" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.862679 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.872210 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.948441 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mzc\" (UniqueName: \"kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.948507 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:34 crc kubenswrapper[4924]: I1211 13:55:34.948593 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.053453 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mzc\" (UniqueName: \"kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.053824 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.053872 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.054270 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.055031 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.105472 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-64mzc\" (UniqueName: \"kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc\") pod \"redhat-operators-mfbdv\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.123118 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.221569 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.231879 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.233273 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:35 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 13:55:35 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:35 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.233353 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.246376 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-nf4pv" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.411062 4924 generic.go:334] "Generic (PLEG): container finished" podID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" 
containerID="5996e1c3421781e0e5a078296bea88ef0479cb8f88370ad112319523d0ccf7c0" exitCode=0 Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.411480 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerDied","Data":"5996e1c3421781e0e5a078296bea88ef0479cb8f88370ad112319523d0ccf7c0"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.427923 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" event={"ID":"bfceb2d6-c7d2-4447-b56d-b71db58955eb","Type":"ContainerStarted","Data":"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.428305 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.433782 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerStarted","Data":"1f4921feeaf643c213428a38af3ca61358932420937b4273aba5c0b27b774013"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.445427 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" event={"ID":"c76ed578-a747-48e3-9653-0e07b782ba8e","Type":"ContainerDied","Data":"1b5e9371797f113cd782117e4fce1c14847ee4cbbbe0af0793b5be46f11c0865"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.445467 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5e9371797f113cd782117e4fce1c14847ee4cbbbe0af0793b5be46f11c0865" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.445527 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-m2qcx" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.449126 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.456028 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"76cc9d02bd7a13068588ef4b270bb97b4ca56de93f759247aad9c1b73c45c78d"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.456169 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" podStartSLOduration=5.45615772 podStartE2EDuration="5.45615772s" podCreationTimestamp="2025-12-11 13:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:35.455959155 +0000 UTC m=+148.965440132" watchObservedRunningTime="2025-12-11 13:55:35.45615772 +0000 UTC m=+148.965638697" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.463053 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86be81166e939765e35e9b6848dd54acd5a1c890b7b2902a3f12b438542b6934"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.503795 4924 generic.go:334] "Generic (PLEG): container finished" podID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerID="a72ad10e549deac73f58d940a2212eadcb890975767eace8619ae17616f81e82" exitCode=0 Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.503872 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-fvkwf" event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerDied","Data":"a72ad10e549deac73f58d940a2212eadcb890975767eace8619ae17616f81e82"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.506341 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" event={"ID":"167e3306-54e1-470a-a7d6-55b2742ca45e","Type":"ContainerStarted","Data":"65e9869c9548122c141a4fea1c08c25652d121893d812a078c6fe687f6737425"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.506882 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.567103 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" podStartSLOduration=129.567085528 podStartE2EDuration="2m9.567085528s" podCreationTimestamp="2025-12-11 13:53:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:35.538770035 +0000 UTC m=+149.048251002" watchObservedRunningTime="2025-12-11 13:55:35.567085528 +0000 UTC m=+149.076566505" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.570595 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dc843dcdbf99f5aff53105b54239c903863d5d64c7714404d970adc19be84ce8"} Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.570627 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"55dbca45df4153d9c112067a3e899e4727430d59f3de671e02ddaafdaea2f7a9"} Dec 11 
13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.570917 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.643398 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.643441 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.646441 4924 patch_prober.go:28] interesting pod/console-f9d7485db-4gbtq container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.646494 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4gbtq" podUID="4632c4d1-bc4e-41f4-89e8-4702ab9397c7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.687508 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tvrr4" podStartSLOduration=12.687490011 podStartE2EDuration="12.687490011s" podCreationTimestamp="2025-12-11 13:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:35.625367391 +0000 UTC m=+149.134848368" watchObservedRunningTime="2025-12-11 13:55:35.687490011 +0000 UTC m=+149.196970988" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.696785 4924 patch_prober.go:28] interesting pod/downloads-7954f5f757-9dvjv container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.696870 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9dvjv" podUID="4c3c7d59-0131-4a77-9828-7a78ff18a8ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.697170 4924 patch_prober.go:28] interesting pod/downloads-7954f5f757-9dvjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.697191 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9dvjv" podUID="4c3c7d59-0131-4a77-9828-7a78ff18a8ab" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Dec 11 13:55:35 crc kubenswrapper[4924]: I1211 13:55:35.763651 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.224691 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.233476 4924 patch_prober.go:28] interesting pod/router-default-5444994796-mcpjh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:55:36 crc kubenswrapper[4924]: [-]has-synced failed: reason withheld Dec 11 
13:55:36 crc kubenswrapper[4924]: [+]process-running ok Dec 11 13:55:36 crc kubenswrapper[4924]: healthz check failed Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.233558 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mcpjh" podUID="4fb40320-55f0-4b7c-9943-29a8abdf5943" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.634524 4924 generic.go:334] "Generic (PLEG): container finished" podID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerID="8924c5a0a758b199bc8e88aa1678bf44ba4a91651732b884484b2ecbaceb920d" exitCode=0 Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.634717 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerDied","Data":"8924c5a0a758b199bc8e88aa1678bf44ba4a91651732b884484b2ecbaceb920d"} Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.655809 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerDied","Data":"baeb1110fa8eae3faf4052d6ff507732787f376867f57f7b358d91867dcb5480"} Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.656495 4924 generic.go:334] "Generic (PLEG): container finished" podID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerID="baeb1110fa8eae3faf4052d6ff507732787f376867f57f7b358d91867dcb5480" exitCode=0 Dec 11 13:55:36 crc kubenswrapper[4924]: I1211 13:55:36.657069 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerStarted","Data":"fc74e4e26828d01c42180dd22f4fb74cf25eae8e698c332f10c78384f68483b2"} Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.235379 4924 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.251443 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mcpjh" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.303644 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.305900 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.309881 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.310028 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.317185 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.429930 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.429992 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.532079 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.532132 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.532218 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.591437 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:37 crc kubenswrapper[4924]: I1211 13:55:37.647920 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:38 crc kubenswrapper[4924]: I1211 13:55:38.179078 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:55:38 crc kubenswrapper[4924]: I1211 13:55:38.716644 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2575703e-b4e1-488e-b6be-3d6d0e13e3a4","Type":"ContainerStarted","Data":"6df5f1c43e777c3a49709390317e85b136d53f6117b19de21141f2a12a162b69"} Dec 11 13:55:40 crc kubenswrapper[4924]: E1211 13:55:40.826120 4924 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.044s" Dec 11 13:55:40 crc kubenswrapper[4924]: I1211 13:55:40.892363 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:55:40 crc kubenswrapper[4924]: I1211 13:55:40.893503 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:55:40 crc kubenswrapper[4924]: I1211 13:55:40.893638 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:40 crc kubenswrapper[4924]: I1211 13:55:40.903351 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 13:55:40 crc kubenswrapper[4924]: I1211 13:55:40.903465 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.017618 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.017678 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.119089 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.119145 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.119300 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.170603 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.236896 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.689981 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:55:41 crc kubenswrapper[4924]: W1211 13:55:41.713775 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod264e9bd7_cfe2_49a5_832f_4442018ab42e.slice/crio-6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a WatchSource:0}: Error finding container 6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a: Status 404 returned error can't find the container with id 6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.864646 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"2575703e-b4e1-488e-b6be-3d6d0e13e3a4","Type":"ContainerStarted","Data":"7d65a0837799ffd9937017f1ec9c7b45dbb98de060bc8d81745de6ddeaa19e15"} Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.867821 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"264e9bd7-cfe2-49a5-832f-4442018ab42e","Type":"ContainerStarted","Data":"6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a"} Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.876298 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-j96gb" Dec 11 13:55:41 crc kubenswrapper[4924]: I1211 13:55:41.901244 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.901227974 podStartE2EDuration="4.901227974s" podCreationTimestamp="2025-12-11 13:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:41.899776584 +0000 UTC m=+155.409257561" watchObservedRunningTime="2025-12-11 13:55:41.901227974 +0000 UTC m=+155.410708951" Dec 11 13:55:42 crc kubenswrapper[4924]: I1211 13:55:42.879993 4924 generic.go:334] "Generic (PLEG): container finished" podID="2575703e-b4e1-488e-b6be-3d6d0e13e3a4" containerID="7d65a0837799ffd9937017f1ec9c7b45dbb98de060bc8d81745de6ddeaa19e15" exitCode=0 Dec 11 13:55:42 crc kubenswrapper[4924]: I1211 13:55:42.880438 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2575703e-b4e1-488e-b6be-3d6d0e13e3a4","Type":"ContainerDied","Data":"7d65a0837799ffd9937017f1ec9c7b45dbb98de060bc8d81745de6ddeaa19e15"} Dec 11 13:55:43 crc kubenswrapper[4924]: I1211 13:55:43.904267 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"264e9bd7-cfe2-49a5-832f-4442018ab42e","Type":"ContainerStarted","Data":"0f64e38101daa39a2e3622ce0e386e76db9c6baa33a58b5a115ea1848cb2e9e7"} Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.302262 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.320019 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.319999698 podStartE2EDuration="5.319999698s" podCreationTimestamp="2025-12-11 13:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:55:43.917078949 +0000 UTC m=+157.426559926" watchObservedRunningTime="2025-12-11 13:55:44.319999698 +0000 UTC m=+157.829480685" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.383633 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir\") pod \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.383788 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access\") pod \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\" (UID: \"2575703e-b4e1-488e-b6be-3d6d0e13e3a4\") " Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.383791 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2575703e-b4e1-488e-b6be-3d6d0e13e3a4" (UID: "2575703e-b4e1-488e-b6be-3d6d0e13e3a4"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.384072 4924 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.389434 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2575703e-b4e1-488e-b6be-3d6d0e13e3a4" (UID: "2575703e-b4e1-488e-b6be-3d6d0e13e3a4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.485345 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2575703e-b4e1-488e-b6be-3d6d0e13e3a4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.912439 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"2575703e-b4e1-488e-b6be-3d6d0e13e3a4","Type":"ContainerDied","Data":"6df5f1c43e777c3a49709390317e85b136d53f6117b19de21141f2a12a162b69"} Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.912483 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df5f1c43e777c3a49709390317e85b136d53f6117b19de21141f2a12a162b69" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.912479 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.916441 4924 generic.go:334] "Generic (PLEG): container finished" podID="264e9bd7-cfe2-49a5-832f-4442018ab42e" containerID="0f64e38101daa39a2e3622ce0e386e76db9c6baa33a58b5a115ea1848cb2e9e7" exitCode=0 Dec 11 13:55:44 crc kubenswrapper[4924]: I1211 13:55:44.916488 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"264e9bd7-cfe2-49a5-832f-4442018ab42e","Type":"ContainerDied","Data":"0f64e38101daa39a2e3622ce0e386e76db9c6baa33a58b5a115ea1848cb2e9e7"} Dec 11 13:55:45 crc kubenswrapper[4924]: I1211 13:55:45.433246 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:55:45 crc kubenswrapper[4924]: I1211 13:55:45.433641 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:55:45 crc kubenswrapper[4924]: I1211 13:55:45.643624 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:45 crc kubenswrapper[4924]: I1211 13:55:45.648096 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4gbtq" Dec 11 13:55:45 crc kubenswrapper[4924]: I1211 13:55:45.699916 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9dvjv" Dec 11 13:55:47 crc 
kubenswrapper[4924]: I1211 13:55:47.727262 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:47 crc kubenswrapper[4924]: I1211 13:55:47.736496 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/39f08493-e794-4e97-bc69-8faa67a120b8-metrics-certs\") pod \"network-metrics-daemon-79mv2\" (UID: \"39f08493-e794-4e97-bc69-8faa67a120b8\") " pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:48 crc kubenswrapper[4924]: I1211 13:55:48.011271 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-79mv2" Dec 11 13:55:53 crc kubenswrapper[4924]: I1211 13:55:53.570662 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.838697 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.932597 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir\") pod \"264e9bd7-cfe2-49a5-832f-4442018ab42e\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.932745 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "264e9bd7-cfe2-49a5-832f-4442018ab42e" (UID: "264e9bd7-cfe2-49a5-832f-4442018ab42e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.932766 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access\") pod \"264e9bd7-cfe2-49a5-832f-4442018ab42e\" (UID: \"264e9bd7-cfe2-49a5-832f-4442018ab42e\") " Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.933309 4924 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/264e9bd7-cfe2-49a5-832f-4442018ab42e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.938509 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "264e9bd7-cfe2-49a5-832f-4442018ab42e" (UID: "264e9bd7-cfe2-49a5-832f-4442018ab42e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.981905 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"264e9bd7-cfe2-49a5-832f-4442018ab42e","Type":"ContainerDied","Data":"6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a"} Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.981948 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d1a93cb053a920f69136333839e0a00d31a9648fb628469f2a7d0832f39060a" Dec 11 13:55:54 crc kubenswrapper[4924]: I1211 13:55:54.981957 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:55:55 crc kubenswrapper[4924]: I1211 13:55:55.035231 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/264e9bd7-cfe2-49a5-832f-4442018ab42e-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:56:06 crc kubenswrapper[4924]: I1211 13:56:06.454350 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-v7bhg" Dec 11 13:56:13 crc kubenswrapper[4924]: I1211 13:56:13.868247 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.303269 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:56:15 crc kubenswrapper[4924]: E1211 13:56:15.303819 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2575703e-b4e1-488e-b6be-3d6d0e13e3a4" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.303833 4924 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2575703e-b4e1-488e-b6be-3d6d0e13e3a4" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: E1211 13:56:15.303846 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264e9bd7-cfe2-49a5-832f-4442018ab42e" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.303851 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="264e9bd7-cfe2-49a5-832f-4442018ab42e" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.303947 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="264e9bd7-cfe2-49a5-832f-4442018ab42e" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.303960 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="2575703e-b4e1-488e-b6be-3d6d0e13e3a4" containerName="pruner" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.304343 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.307628 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.307807 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.327287 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.362950 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc 
kubenswrapper[4924]: I1211 13:56:15.363019 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.433688 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.433745 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.464103 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.464176 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.464258 4924 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.487394 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:15 crc kubenswrapper[4924]: I1211 13:56:15.632211 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.304715 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.306468 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.309314 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.369055 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.369118 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.369166 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.470808 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.470914 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.470941 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.470993 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.471029 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.508804 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access\") pod \"installer-9-crc\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:20 crc kubenswrapper[4924]: I1211 13:56:20.639865 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:56:32 crc kubenswrapper[4924]: E1211 13:56:32.921147 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 13:56:32 crc kubenswrapper[4924]: E1211 13:56:32.921887 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdt9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-kwx68_openshift-marketplace(87b9a0d8-b3c8-4076-bf25-a56dd799870c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:32 crc kubenswrapper[4924]: E1211 13:56:32.923904 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-kwx68" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" Dec 11 13:56:35 crc kubenswrapper[4924]: E1211 13:56:35.882625 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-kwx68" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" Dec 11 13:56:36 crc kubenswrapper[4924]: E1211 13:56:36.916336 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 13:56:36 crc kubenswrapper[4924]: E1211 13:56:36.916736 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rjg2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hmslv_openshift-marketplace(4b178dc2-db02-45b7-a589-b1e71d29c50e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:36 crc kubenswrapper[4924]: E1211 13:56:36.917839 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hmslv" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" Dec 11 13:56:37 crc 
kubenswrapper[4924]: E1211 13:56:37.676032 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 13:56:37 crc kubenswrapper[4924]: E1211 13:56:37.676174 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfvbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-86ngv_openshift-marketplace(592cde8b-91b2-49aa-a607-c84a02074d89): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:37 crc kubenswrapper[4924]: E1211 13:56:37.677263 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-86ngv" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" Dec 11 13:56:38 crc kubenswrapper[4924]: E1211 13:56:38.620092 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-86ngv" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" Dec 11 13:56:38 crc kubenswrapper[4924]: E1211 13:56:38.620066 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hmslv" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" Dec 11 13:56:45 crc kubenswrapper[4924]: I1211 13:56:45.433407 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:56:45 crc kubenswrapper[4924]: I1211 13:56:45.433859 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:56:45 crc kubenswrapper[4924]: I1211 13:56:45.433938 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 13:56:45 crc kubenswrapper[4924]: I1211 13:56:45.434910 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 13:56:45 crc kubenswrapper[4924]: I1211 13:56:45.435000 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543" gracePeriod=600 Dec 11 13:56:50 crc kubenswrapper[4924]: E1211 13:56:50.265068 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 13:56:50 crc kubenswrapper[4924]: E1211 13:56:50.265873 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2z6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tkv96_openshift-marketplace(2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:50 crc kubenswrapper[4924]: E1211 13:56:50.267145 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tkv96" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" Dec 11 13:56:50 crc 
kubenswrapper[4924]: I1211 13:56:50.281219 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543" exitCode=0 Dec 11 13:56:50 crc kubenswrapper[4924]: I1211 13:56:50.281412 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543"} Dec 11 13:56:54 crc kubenswrapper[4924]: E1211 13:56:54.733592 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tkv96" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" Dec 11 13:56:54 crc kubenswrapper[4924]: I1211 13:56:54.945585 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:56:55 crc kubenswrapper[4924]: I1211 13:56:55.008483 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-79mv2"] Dec 11 13:56:55 crc kubenswrapper[4924]: I1211 13:56:55.011233 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:56:55 crc kubenswrapper[4924]: I1211 13:56:55.309137 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-79mv2" event={"ID":"39f08493-e794-4e97-bc69-8faa67a120b8","Type":"ContainerStarted","Data":"c73f6ecc04d1136ae65ea321c6db944d323a83bef5176c65db54e00b8a7fc6d9"} Dec 11 13:56:55 crc kubenswrapper[4924]: I1211 13:56:55.310736 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a","Type":"ContainerStarted","Data":"a648cc717c40347593dbac07fb2763a09528a6400a19b3c16db65c3d479a2071"} Dec 11 13:56:55 crc kubenswrapper[4924]: I1211 13:56:55.311824 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"190c38aa-6c11-4635-a579-fc85fa4e367a","Type":"ContainerStarted","Data":"69181684611bb4d7c97d1dde333369b7931f691d507b3adca0058a0f2776ca2e"} Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.810743 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.811276 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxzpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bgbpp_openshift-marketplace(5644036b-4e4a-4ec8-b8e4-87db71012482): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.812561 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bgbpp" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" Dec 11 13:56:59 crc 
kubenswrapper[4924]: E1211 13:56:59.842555 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.842747 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txg9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-fvkwf_openshift-marketplace(f678bb40-07bb-4ae9-a317-4d06821f518a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.843936 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-fvkwf" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.860585 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.860837 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tb9pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6xvm8_openshift-marketplace(bceef104-5373-46a2-b7d9-5cc5782449f6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:56:59 crc kubenswrapper[4924]: E1211 13:56:59.862080 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6xvm8" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" Dec 11 13:57:00 crc 
kubenswrapper[4924]: I1211 13:57:00.355185 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-79mv2" event={"ID":"39f08493-e794-4e97-bc69-8faa67a120b8","Type":"ContainerStarted","Data":"7a1efc4d2abdacd628729a31599b215d22a265d685f90ca4dce2550a922c2d54"} Dec 11 13:57:02 crc kubenswrapper[4924]: I1211 13:57:02.370255 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a","Type":"ContainerStarted","Data":"92b0e37aa8a65cc776a70dc67ae171fd37d4ccc1cf0e178866603d02e5fc5d98"} Dec 11 13:57:02 crc kubenswrapper[4924]: I1211 13:57:02.372704 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8"} Dec 11 13:57:02 crc kubenswrapper[4924]: I1211 13:57:02.374088 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"190c38aa-6c11-4635-a579-fc85fa4e367a","Type":"ContainerStarted","Data":"4de59b17aae87a4d77037cb31e9292f8de76c49e202f75b6d378f64dc2b415b0"} Dec 11 13:57:02 crc kubenswrapper[4924]: E1211 13:57:02.792052 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 13:57:02 crc kubenswrapper[4924]: E1211 13:57:02.793712 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64mzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mfbdv_openshift-marketplace(e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:57:02 crc kubenswrapper[4924]: E1211 13:57:02.795382 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mfbdv" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" Dec 11 13:57:03 crc 
kubenswrapper[4924]: I1211 13:57:03.384554 4924 generic.go:334] "Generic (PLEG): container finished" podID="a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" containerID="92b0e37aa8a65cc776a70dc67ae171fd37d4ccc1cf0e178866603d02e5fc5d98" exitCode=0 Dec 11 13:57:03 crc kubenswrapper[4924]: I1211 13:57:03.384629 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a","Type":"ContainerDied","Data":"92b0e37aa8a65cc776a70dc67ae171fd37d4ccc1cf0e178866603d02e5fc5d98"} Dec 11 13:57:03 crc kubenswrapper[4924]: I1211 13:57:03.453235 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=43.453214756 podStartE2EDuration="43.453214756s" podCreationTimestamp="2025-12-11 13:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:57:03.448726865 +0000 UTC m=+236.958207852" watchObservedRunningTime="2025-12-11 13:57:03.453214756 +0000 UTC m=+236.962695733" Dec 11 13:57:03 crc kubenswrapper[4924]: E1211 13:57:03.712738 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-fvkwf" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" Dec 11 13:57:03 crc kubenswrapper[4924]: E1211 13:57:03.712829 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6xvm8" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" Dec 11 13:57:03 crc kubenswrapper[4924]: E1211 13:57:03.712818 4924 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mfbdv" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.392426 4924 generic.go:334] "Generic (PLEG): container finished" podID="592cde8b-91b2-49aa-a607-c84a02074d89" containerID="e06ce28f1ce7480268f35d5272e5cde6d82da0fe565a1d2bb8e5e18a201b0972" exitCode=0 Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.392458 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerDied","Data":"e06ce28f1ce7480268f35d5272e5cde6d82da0fe565a1d2bb8e5e18a201b0972"} Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.396049 4924 generic.go:334] "Generic (PLEG): container finished" podID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerID="13f8084adc66c1f1d107eee356a86e5cf5717e6fe13f86d074d1133fd0ef4229" exitCode=0 Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.396121 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerDied","Data":"13f8084adc66c1f1d107eee356a86e5cf5717e6fe13f86d074d1133fd0ef4229"} Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.398527 4924 generic.go:334] "Generic (PLEG): container finished" podID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerID="a1a603174e78f9d9d920b42039610b78f8a3a9b32e61bf2442a51d7982050dd1" exitCode=0 Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.398614 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" 
event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerDied","Data":"a1a603174e78f9d9d920b42039610b78f8a3a9b32e61bf2442a51d7982050dd1"} Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.401608 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-79mv2" event={"ID":"39f08493-e794-4e97-bc69-8faa67a120b8","Type":"ContainerStarted","Data":"a4125b315e5a15b4ace130b9495e44834d938f94802a9b1d7c13698e648f82b3"} Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.613623 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.633636 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-79mv2" podStartSLOduration=219.63361569 podStartE2EDuration="3m39.63361569s" podCreationTimestamp="2025-12-11 13:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:57:04.462570434 +0000 UTC m=+237.972051411" watchObservedRunningTime="2025-12-11 13:57:04.63361569 +0000 UTC m=+238.143096677" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.706660 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir\") pod \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.706749 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access\") pod \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\" (UID: \"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a\") " Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 
13:57:04.707248 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" (UID: "a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.711629 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" (UID: "a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.807759 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:04 crc kubenswrapper[4924]: I1211 13:57:04.807788 4924 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:05 crc kubenswrapper[4924]: I1211 13:57:05.409998 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a","Type":"ContainerDied","Data":"a648cc717c40347593dbac07fb2763a09528a6400a19b3c16db65c3d479a2071"} Dec 11 13:57:05 crc kubenswrapper[4924]: I1211 13:57:05.410346 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a648cc717c40347593dbac07fb2763a09528a6400a19b3c16db65c3d479a2071" Dec 11 13:57:05 crc kubenswrapper[4924]: I1211 13:57:05.410079 4924 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.421465 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerStarted","Data":"b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11"} Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.423524 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerStarted","Data":"2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118"} Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.425594 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerStarted","Data":"9313af13a47cdbbe46f5fc2277a24f0534ef7f99dd1d145e259498e9baa3b95a"} Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.453194 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kwx68" podStartSLOduration=3.005987855 podStartE2EDuration="1m34.453167528s" podCreationTimestamp="2025-12-11 13:55:33 +0000 UTC" firstStartedPulling="2025-12-11 13:55:35.412626661 +0000 UTC m=+148.922107638" lastFinishedPulling="2025-12-11 13:57:06.859806334 +0000 UTC m=+240.369287311" observedRunningTime="2025-12-11 13:57:07.451046197 +0000 UTC m=+240.960527184" watchObservedRunningTime="2025-12-11 13:57:07.453167528 +0000 UTC m=+240.962648505" Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.494920 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-86ngv" podStartSLOduration=3.844109374 podStartE2EDuration="1m36.494898645s" podCreationTimestamp="2025-12-11 
13:55:31 +0000 UTC" firstStartedPulling="2025-12-11 13:55:33.295907819 +0000 UTC m=+146.805388796" lastFinishedPulling="2025-12-11 13:57:05.94669709 +0000 UTC m=+239.456178067" observedRunningTime="2025-12-11 13:57:07.476681814 +0000 UTC m=+240.986162791" watchObservedRunningTime="2025-12-11 13:57:07.494898645 +0000 UTC m=+241.004379622" Dec 11 13:57:07 crc kubenswrapper[4924]: I1211 13:57:07.498652 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hmslv" podStartSLOduration=3.022838248 podStartE2EDuration="1m36.498638304s" podCreationTimestamp="2025-12-11 13:55:31 +0000 UTC" firstStartedPulling="2025-12-11 13:55:33.303162872 +0000 UTC m=+146.812643849" lastFinishedPulling="2025-12-11 13:57:06.778962908 +0000 UTC m=+240.288443905" observedRunningTime="2025-12-11 13:57:07.495262065 +0000 UTC m=+241.004743052" watchObservedRunningTime="2025-12-11 13:57:07.498638304 +0000 UTC m=+241.008119281" Dec 11 13:57:08 crc kubenswrapper[4924]: I1211 13:57:08.433827 4924 generic.go:334] "Generic (PLEG): container finished" podID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerID="294e22e7c144be9be4700c46ad769306e2f139a6ef0723c4e76ba59b95ea968d" exitCode=0 Dec 11 13:57:08 crc kubenswrapper[4924]: I1211 13:57:08.433932 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerDied","Data":"294e22e7c144be9be4700c46ad769306e2f139a6ef0723c4e76ba59b95ea968d"} Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.475118 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.475663 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.716680 4924 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.866086 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.866147 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:11 crc kubenswrapper[4924]: I1211 13:57:11.909148 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:12 crc kubenswrapper[4924]: I1211 13:57:12.491940 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:12 crc kubenswrapper[4924]: I1211 13:57:12.494219 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hmslv" Dec 11 13:57:13 crc kubenswrapper[4924]: I1211 13:57:13.461283 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerStarted","Data":"4cece45d59e8d48210e1c583dfe8073343ceae265067c5cbb1bf6f4666c7418c"} Dec 11 13:57:13 crc kubenswrapper[4924]: I1211 13:57:13.480675 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkv96" podStartSLOduration=4.119745152 podStartE2EDuration="1m42.480656484s" podCreationTimestamp="2025-12-11 13:55:31 +0000 UTC" firstStartedPulling="2025-12-11 13:55:34.369402324 +0000 UTC m=+147.878883301" lastFinishedPulling="2025-12-11 13:57:12.730313656 +0000 UTC m=+246.239794633" observedRunningTime="2025-12-11 13:57:13.478014507 +0000 UTC m=+246.987495484" watchObservedRunningTime="2025-12-11 
13:57:13.480656484 +0000 UTC m=+246.990137461" Dec 11 13:57:13 crc kubenswrapper[4924]: I1211 13:57:13.879106 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:13 crc kubenswrapper[4924]: I1211 13:57:13.879190 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:13 crc kubenswrapper[4924]: I1211 13:57:13.931158 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:14 crc kubenswrapper[4924]: I1211 13:57:14.468717 4924 generic.go:334] "Generic (PLEG): container finished" podID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerID="8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14" exitCode=0 Dec 11 13:57:14 crc kubenswrapper[4924]: I1211 13:57:14.468795 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerDied","Data":"8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14"} Dec 11 13:57:14 crc kubenswrapper[4924]: I1211 13:57:14.516727 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:14 crc kubenswrapper[4924]: I1211 13:57:14.590350 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:57:14 crc kubenswrapper[4924]: I1211 13:57:14.590604 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-86ngv" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="registry-server" containerID="cri-o://2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" gracePeriod=2 Dec 11 13:57:16 crc kubenswrapper[4924]: I1211 13:57:16.791937 4924 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:57:16 crc kubenswrapper[4924]: I1211 13:57:16.792430 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kwx68" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="registry-server" containerID="cri-o://b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" gracePeriod=2 Dec 11 13:57:18 crc kubenswrapper[4924]: I1211 13:57:18.491424 4924 generic.go:334] "Generic (PLEG): container finished" podID="592cde8b-91b2-49aa-a607-c84a02074d89" containerID="2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" exitCode=0 Dec 11 13:57:18 crc kubenswrapper[4924]: I1211 13:57:18.491456 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerDied","Data":"2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118"} Dec 11 13:57:21 crc kubenswrapper[4924]: I1211 13:57:21.512211 4924 generic.go:334] "Generic (PLEG): container finished" podID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerID="b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" exitCode=0 Dec 11 13:57:21 crc kubenswrapper[4924]: I1211 13:57:21.512335 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerDied","Data":"b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11"} Dec 11 13:57:21 crc kubenswrapper[4924]: E1211 13:57:21.866389 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118 is running failed: container process not found" 
containerID="2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:21 crc kubenswrapper[4924]: E1211 13:57:21.868194 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118 is running failed: container process not found" containerID="2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:21 crc kubenswrapper[4924]: E1211 13:57:21.868709 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118 is running failed: container process not found" containerID="2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:21 crc kubenswrapper[4924]: E1211 13:57:21.868754 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-86ngv" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="registry-server" Dec 11 13:57:21 crc kubenswrapper[4924]: I1211 13:57:21.940503 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:57:21 crc kubenswrapper[4924]: I1211 13:57:21.940571 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:57:21 crc kubenswrapper[4924]: I1211 13:57:21.992167 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:57:22 crc kubenswrapper[4924]: I1211 13:57:22.555983 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkv96" Dec 11 13:57:23 crc kubenswrapper[4924]: E1211 13:57:23.879476 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11 is running failed: container process not found" containerID="b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:23 crc kubenswrapper[4924]: E1211 13:57:23.880436 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11 is running failed: container process not found" containerID="b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:23 crc kubenswrapper[4924]: E1211 13:57:23.880756 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11 is running failed: container process not found" containerID="b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 13:57:23 crc kubenswrapper[4924]: E1211 13:57:23.880807 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-kwx68" 
podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="registry-server" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.142050 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.148156 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285792 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdt9c\" (UniqueName: \"kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c\") pod \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285854 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities\") pod \"592cde8b-91b2-49aa-a607-c84a02074d89\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285902 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities\") pod \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285918 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content\") pod \"592cde8b-91b2-49aa-a607-c84a02074d89\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285948 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content\") pod \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\" (UID: \"87b9a0d8-b3c8-4076-bf25-a56dd799870c\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.285973 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfvbc\" (UniqueName: \"kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc\") pod \"592cde8b-91b2-49aa-a607-c84a02074d89\" (UID: \"592cde8b-91b2-49aa-a607-c84a02074d89\") " Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.287429 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities" (OuterVolumeSpecName: "utilities") pod "87b9a0d8-b3c8-4076-bf25-a56dd799870c" (UID: "87b9a0d8-b3c8-4076-bf25-a56dd799870c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.287698 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities" (OuterVolumeSpecName: "utilities") pod "592cde8b-91b2-49aa-a607-c84a02074d89" (UID: "592cde8b-91b2-49aa-a607-c84a02074d89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.295297 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c" (OuterVolumeSpecName: "kube-api-access-bdt9c") pod "87b9a0d8-b3c8-4076-bf25-a56dd799870c" (UID: "87b9a0d8-b3c8-4076-bf25-a56dd799870c"). InnerVolumeSpecName "kube-api-access-bdt9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.297228 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc" (OuterVolumeSpecName: "kube-api-access-dfvbc") pod "592cde8b-91b2-49aa-a607-c84a02074d89" (UID: "592cde8b-91b2-49aa-a607-c84a02074d89"). InnerVolumeSpecName "kube-api-access-dfvbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.310470 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b9a0d8-b3c8-4076-bf25-a56dd799870c" (UID: "87b9a0d8-b3c8-4076-bf25-a56dd799870c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.339718 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592cde8b-91b2-49aa-a607-c84a02074d89" (UID: "592cde8b-91b2-49aa-a607-c84a02074d89"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387392 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdt9c\" (UniqueName: \"kubernetes.io/projected/87b9a0d8-b3c8-4076-bf25-a56dd799870c-kube-api-access-bdt9c\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387443 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387459 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592cde8b-91b2-49aa-a607-c84a02074d89-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387470 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387481 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b9a0d8-b3c8-4076-bf25-a56dd799870c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.387492 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfvbc\" (UniqueName: \"kubernetes.io/projected/592cde8b-91b2-49aa-a607-c84a02074d89-kube-api-access-dfvbc\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.534436 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-86ngv" event={"ID":"592cde8b-91b2-49aa-a607-c84a02074d89","Type":"ContainerDied","Data":"4feb793a56492a3a497ac1b733af1c90978018554a123d3c565bdc43dfbbed29"} 
Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.534488 4924 scope.go:117] "RemoveContainer" containerID="2714fc977a45a05eb347d51646bf6ab7c1c5f62882eee0dddc0a4482dfd1f118" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.534542 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-86ngv" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.537032 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kwx68" event={"ID":"87b9a0d8-b3c8-4076-bf25-a56dd799870c","Type":"ContainerDied","Data":"7021507234a8d73ca948eb697d783ee6fdda83cdb4710e6790698e93d8a03b1f"} Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.537097 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kwx68" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.570589 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.575740 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-86ngv"] Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.584198 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.590945 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kwx68"] Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.671537 4924 scope.go:117] "RemoveContainer" containerID="e06ce28f1ce7480268f35d5272e5cde6d82da0fe565a1d2bb8e5e18a201b0972" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.698084 4924 scope.go:117] "RemoveContainer" containerID="03cddd9f47eea030acac372b51ffcd51cf9321e5f3bf0301071cc61bf0a14ed2" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 
13:57:25.727378 4924 scope.go:117] "RemoveContainer" containerID="b6983a452d6c5eb3949cd32edef81c06636ff99d02de468140e4d5f713e5af11" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.750667 4924 scope.go:117] "RemoveContainer" containerID="a1a603174e78f9d9d920b42039610b78f8a3a9b32e61bf2442a51d7982050dd1" Dec 11 13:57:25 crc kubenswrapper[4924]: I1211 13:57:25.792557 4924 scope.go:117] "RemoveContainer" containerID="5996e1c3421781e0e5a078296bea88ef0479cb8f88370ad112319523d0ccf7c0" Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.543366 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerStarted","Data":"d46eb7bc44782fb847fbae790ef167baaeb2666c326c0a4ba55f8fa1fe808ee1"} Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.545358 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerStarted","Data":"3b276879bb1ba21811a9b6d1e41a5194942b7ac347bc2441f7b55a5d55e82d6f"} Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.547151 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerStarted","Data":"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843"} Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.551046 4924 generic.go:334] "Generic (PLEG): container finished" podID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerID="b5316e8315be2ccd5c6aef7c9c479c1523bc559dc6a19cacd1511bd3e1e8931e" exitCode=0 Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.551124 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvkwf" 
event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerDied","Data":"b5316e8315be2ccd5c6aef7c9c479c1523bc559dc6a19cacd1511bd3e1e8931e"} Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.611133 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bgbpp" podStartSLOduration=5.67870605 podStartE2EDuration="1m55.611115226s" podCreationTimestamp="2025-12-11 13:55:31 +0000 UTC" firstStartedPulling="2025-12-11 13:55:35.574194017 +0000 UTC m=+149.083674994" lastFinishedPulling="2025-12-11 13:57:25.506603193 +0000 UTC m=+259.016084170" observedRunningTime="2025-12-11 13:57:26.608776137 +0000 UTC m=+260.118257124" watchObservedRunningTime="2025-12-11 13:57:26.611115226 +0000 UTC m=+260.120596203" Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.788886 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" path="/var/lib/kubelet/pods/592cde8b-91b2-49aa-a607-c84a02074d89/volumes" Dec 11 13:57:26 crc kubenswrapper[4924]: I1211 13:57:26.789568 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" path="/var/lib/kubelet/pods/87b9a0d8-b3c8-4076-bf25-a56dd799870c/volumes" Dec 11 13:57:27 crc kubenswrapper[4924]: I1211 13:57:27.564569 4924 generic.go:334] "Generic (PLEG): container finished" podID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerID="3b276879bb1ba21811a9b6d1e41a5194942b7ac347bc2441f7b55a5d55e82d6f" exitCode=0 Dec 11 13:57:27 crc kubenswrapper[4924]: I1211 13:57:27.564642 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerDied","Data":"3b276879bb1ba21811a9b6d1e41a5194942b7ac347bc2441f7b55a5d55e82d6f"} Dec 11 13:57:27 crc kubenswrapper[4924]: I1211 13:57:27.569939 4924 generic.go:334] "Generic (PLEG): container finished" 
podID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerID="d46eb7bc44782fb847fbae790ef167baaeb2666c326c0a4ba55f8fa1fe808ee1" exitCode=0 Dec 11 13:57:27 crc kubenswrapper[4924]: I1211 13:57:27.569988 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerDied","Data":"d46eb7bc44782fb847fbae790ef167baaeb2666c326c0a4ba55f8fa1fe808ee1"} Dec 11 13:57:29 crc kubenswrapper[4924]: I1211 13:57:29.584193 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvkwf" event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerStarted","Data":"6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4"} Dec 11 13:57:29 crc kubenswrapper[4924]: I1211 13:57:29.604413 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fvkwf" podStartSLOduration=3.406777581 podStartE2EDuration="1m56.604393357s" podCreationTimestamp="2025-12-11 13:55:33 +0000 UTC" firstStartedPulling="2025-12-11 13:55:35.505640667 +0000 UTC m=+149.015121644" lastFinishedPulling="2025-12-11 13:57:28.703256443 +0000 UTC m=+262.212737420" observedRunningTime="2025-12-11 13:57:29.601989486 +0000 UTC m=+263.111470463" watchObservedRunningTime="2025-12-11 13:57:29.604393357 +0000 UTC m=+263.113874334" Dec 11 13:57:32 crc kubenswrapper[4924]: I1211 13:57:32.340123 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:32 crc kubenswrapper[4924]: I1211 13:57:32.340771 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:32 crc kubenswrapper[4924]: I1211 13:57:32.394725 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:32 crc 
kubenswrapper[4924]: I1211 13:57:32.601015 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerStarted","Data":"227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2"} Dec 11 13:57:32 crc kubenswrapper[4924]: I1211 13:57:32.642814 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:33 crc kubenswrapper[4924]: I1211 13:57:33.451871 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:57:33 crc kubenswrapper[4924]: I1211 13:57:33.452233 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:57:33 crc kubenswrapper[4924]: I1211 13:57:33.525304 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:57:33 crc kubenswrapper[4924]: I1211 13:57:33.789042 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:57:34 crc kubenswrapper[4924]: I1211 13:57:34.610610 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bgbpp" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="registry-server" containerID="cri-o://8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843" gracePeriod=2 Dec 11 13:57:34 crc kubenswrapper[4924]: I1211 13:57:34.639357 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xvm8" podStartSLOduration=7.09990681 podStartE2EDuration="2m0.639318787s" podCreationTimestamp="2025-12-11 13:55:34 +0000 UTC" firstStartedPulling="2025-12-11 13:55:36.642094835 +0000 UTC m=+150.151575822" 
lastFinishedPulling="2025-12-11 13:57:30.181506822 +0000 UTC m=+263.690987799" observedRunningTime="2025-12-11 13:57:34.634776294 +0000 UTC m=+268.144257271" watchObservedRunningTime="2025-12-11 13:57:34.639318787 +0000 UTC m=+268.148799764" Dec 11 13:57:34 crc kubenswrapper[4924]: I1211 13:57:34.832430 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:57:34 crc kubenswrapper[4924]: I1211 13:57:34.832729 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:57:34 crc kubenswrapper[4924]: I1211 13:57:34.958149 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.114655 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities\") pod \"5644036b-4e4a-4ec8-b8e4-87db71012482\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.114797 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxzpg\" (UniqueName: \"kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg\") pod \"5644036b-4e4a-4ec8-b8e4-87db71012482\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.114816 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content\") pod \"5644036b-4e4a-4ec8-b8e4-87db71012482\" (UID: \"5644036b-4e4a-4ec8-b8e4-87db71012482\") " Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.115547 4924 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities" (OuterVolumeSpecName: "utilities") pod "5644036b-4e4a-4ec8-b8e4-87db71012482" (UID: "5644036b-4e4a-4ec8-b8e4-87db71012482"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.120510 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg" (OuterVolumeSpecName: "kube-api-access-jxzpg") pod "5644036b-4e4a-4ec8-b8e4-87db71012482" (UID: "5644036b-4e4a-4ec8-b8e4-87db71012482"). InnerVolumeSpecName "kube-api-access-jxzpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.167597 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5644036b-4e4a-4ec8-b8e4-87db71012482" (UID: "5644036b-4e4a-4ec8-b8e4-87db71012482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.216603 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxzpg\" (UniqueName: \"kubernetes.io/projected/5644036b-4e4a-4ec8-b8e4-87db71012482-kube-api-access-jxzpg\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.216646 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.216658 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5644036b-4e4a-4ec8-b8e4-87db71012482-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.617021 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerStarted","Data":"345db10d571dd29944e141c350767597db30d5854144839dee2c77ce7856f8e6"} Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.620407 4924 generic.go:334] "Generic (PLEG): container finished" podID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerID="8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843" exitCode=0 Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.620483 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bgbpp" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.620528 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerDied","Data":"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843"} Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.620562 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bgbpp" event={"ID":"5644036b-4e4a-4ec8-b8e4-87db71012482","Type":"ContainerDied","Data":"6d7ac09507384ceef977ba9168c6449a7b36a21120b2350b2744ac2d8d41203a"} Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.620581 4924 scope.go:117] "RemoveContainer" containerID="8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.645424 4924 scope.go:117] "RemoveContainer" containerID="8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.645639 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mfbdv" podStartSLOduration=3.730476448 podStartE2EDuration="2m1.645618272s" podCreationTimestamp="2025-12-11 13:55:34 +0000 UTC" firstStartedPulling="2025-12-11 13:55:36.662117946 +0000 UTC m=+150.171598923" lastFinishedPulling="2025-12-11 13:57:34.57725977 +0000 UTC m=+268.086740747" observedRunningTime="2025-12-11 13:57:35.64216419 +0000 UTC m=+269.151645167" watchObservedRunningTime="2025-12-11 13:57:35.645618272 +0000 UTC m=+269.155099249" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.661007 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.663565 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-bgbpp"] Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.677104 4924 scope.go:117] "RemoveContainer" containerID="a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.693151 4924 scope.go:117] "RemoveContainer" containerID="8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843" Dec 11 13:57:35 crc kubenswrapper[4924]: E1211 13:57:35.694748 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843\": container with ID starting with 8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843 not found: ID does not exist" containerID="8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.694814 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843"} err="failed to get container status \"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843\": rpc error: code = NotFound desc = could not find container \"8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843\": container with ID starting with 8cfdbf4781986dded4a43a39938fb50756f88b98bc1d3d3451a642dfbe9dd843 not found: ID does not exist" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.694847 4924 scope.go:117] "RemoveContainer" containerID="8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14" Dec 11 13:57:35 crc kubenswrapper[4924]: E1211 13:57:35.695230 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14\": container with ID starting with 
8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14 not found: ID does not exist" containerID="8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.695553 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14"} err="failed to get container status \"8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14\": rpc error: code = NotFound desc = could not find container \"8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14\": container with ID starting with 8db0d309a725094879811e1b0daf417ef4f5358b81778b93ffb7ba5a90704e14 not found: ID does not exist" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.695678 4924 scope.go:117] "RemoveContainer" containerID="a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1" Dec 11 13:57:35 crc kubenswrapper[4924]: E1211 13:57:35.696053 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1\": container with ID starting with a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1 not found: ID does not exist" containerID="a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.696086 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1"} err="failed to get container status \"a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1\": rpc error: code = NotFound desc = could not find container \"a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1\": container with ID starting with a1b1074fef5a42789cb5086f97c627c579e4936b8cd076646a8083f8813e8ee1 not found: ID does not 
exist" Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.731177 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-p7n6k"] Dec 11 13:57:35 crc kubenswrapper[4924]: I1211 13:57:35.868470 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xvm8" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server" probeResult="failure" output=< Dec 11 13:57:35 crc kubenswrapper[4924]: timeout: failed to connect service ":50051" within 1s Dec 11 13:57:35 crc kubenswrapper[4924]: > Dec 11 13:57:36 crc kubenswrapper[4924]: I1211 13:57:36.789594 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" path="/var/lib/kubelet/pods/5644036b-4e4a-4ec8-b8e4-87db71012482/volumes" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.546904 4924 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548371 4924 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548772 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" containerName="pruner" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548796 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" containerName="pruner" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548812 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="extract-content" Dec 11 
13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548822 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="extract-content" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548835 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548841 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548848 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548854 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548864 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548869 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548878 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548884 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548896 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="extract-content" Dec 11 
13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548904 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="extract-content" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548916 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548923 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="extract-utilities" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548935 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548944 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.548955 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="extract-content" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.548961 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="extract-content" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.549080 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="592cde8b-91b2-49aa-a607-c84a02074d89" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.549089 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b9a0d8-b3c8-4076-bf25-a56dd799870c" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.549102 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e9a9ca-9d51-4b2e-b244-9b2b3987a46a" containerName="pruner" Dec 11 13:57:39 crc 
kubenswrapper[4924]: I1211 13:57:39.549112 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="5644036b-4e4a-4ec8-b8e4-87db71012482" containerName="registry-server" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.549645 4924 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.549822 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.550041 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" gracePeriod=15 Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.550073 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" gracePeriod=15 Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.550109 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" gracePeriod=15 Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.550170 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" 
containerID="cri-o://1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" gracePeriod=15 Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.550355 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" gracePeriod=15 Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.551043 4924 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552311 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552352 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552371 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552379 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552388 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552396 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552413 4924 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552420 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552433 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552440 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552453 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552461 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552588 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552601 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552612 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552626 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552640 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.552759 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552768 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.552911 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.591762 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678334 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678729 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678765 4924 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678786 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678806 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678913 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.678942 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 
13:57:39.678969 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.779596 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.779875 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.779954 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.779959 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.779704 4924 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780126 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780175 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780131 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780218 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780191 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780280 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780366 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780383 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780417 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780394 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.780446 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:39 crc kubenswrapper[4924]: E1211 13:57:39.836602 4924 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod190c38aa_6c11_4635_a579_fc85fa4e367a.slice/crio-4de59b17aae87a4d77037cb31e9292f8de76c49e202f75b6d378f64dc2b415b0.scope\": RecentStats: unable to find data in memory cache]" Dec 11 13:57:39 crc kubenswrapper[4924]: I1211 13:57:39.891477 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:57:40 crc kubenswrapper[4924]: I1211 13:57:40.325385 4924 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 13:57:40 crc kubenswrapper[4924]: I1211 13:57:40.325739 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 13:57:40 crc kubenswrapper[4924]: I1211 13:57:40.646351 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8f36697d430619a7c2c4d68294ee1805a5a6741210680ab8c84e7abb6c4930f7"} Dec 11 13:57:43 crc kubenswrapper[4924]: I1211 13:57:43.493233 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fvkwf" Dec 11 13:57:44 crc kubenswrapper[4924]: I1211 13:57:44.874027 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:57:44 crc kubenswrapper[4924]: I1211 13:57:44.875923 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:44 crc kubenswrapper[4924]: I1211 
13:57:44.924784 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 13:57:44 crc kubenswrapper[4924]: I1211 13:57:44.925343 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:45 crc kubenswrapper[4924]: I1211 13:57:45.222922 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:57:45 crc kubenswrapper[4924]: I1211 13:57:45.222976 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:57:45 crc kubenswrapper[4924]: I1211 13:57:45.262376 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 13:57:45 crc kubenswrapper[4924]: I1211 13:57:45.262885 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:45 crc kubenswrapper[4924]: I1211 13:57:45.263138 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:46 crc kubenswrapper[4924]: I1211 13:57:46.785276 4924 status_manager.go:851] "Failed to get status for pod" 
podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:46 crc kubenswrapper[4924]: I1211 13:57:46.786781 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:47 crc kubenswrapper[4924]: I1211 13:57:47.398000 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 11 13:57:47 crc kubenswrapper[4924]: I1211 13:57:47.398862 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:57:48 crc kubenswrapper[4924]: E1211 13:57:48.435988 4924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802dda0ab946e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC 
m=+281.944750325,LastTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC m=+281.944750325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.872994 4924 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.873684 4924 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.874084 4924 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.877161 4924 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.877586 4924 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:50.877619 4924 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:50.878040 4924 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="200ms" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:51.079221 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="400ms" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.133122 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:51.480681 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="800ms" Dec 11 13:57:51 crc kubenswrapper[4924]: E1211 13:57:51.975632 4924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802dda0ab946e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC 
m=+281.944750325,LastTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC m=+281.944750325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.995158 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.996761 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.997572 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.998036 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.998202 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:51 crc kubenswrapper[4924]: I1211 13:57:51.998489 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.038720 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.038811 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.038906 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.039212 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.039229 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.039254 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.140379 4924 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.140433 4924 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.140445 4924 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.245129 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" exitCode=-1 Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.245165 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" exitCode=0 Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.245178 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" exitCode=0 Dec 11 
13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.245186 4924 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" exitCode=2 Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.245271 4924 scope.go:117] "RemoveContainer" containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.260212 4924 scope.go:117] "RemoveContainer" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.278631 4924 scope.go:117] "RemoveContainer" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.281623 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="1.6s" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.293985 4924 scope.go:117] "RemoveContainer" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.307342 4924 scope.go:117] "RemoveContainer" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.329354 4924 scope.go:117] "RemoveContainer" containerID="7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.347759 4924 scope.go:117] "RemoveContainer" containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.371287 4924 scope.go:117] "RemoveContainer" 
containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.371797 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": container with ID starting with 1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4 not found: ID does not exist" containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.371847 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4"} err="failed to get container status \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": rpc error: code = NotFound desc = could not find container \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": container with ID starting with 1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.371880 4924 scope.go:117] "RemoveContainer" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.372292 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": container with ID starting with 29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6 not found: ID does not exist" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.372345 4924 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6"} err="failed to get container status \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": rpc error: code = NotFound desc = could not find container \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": container with ID starting with 29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.372378 4924 scope.go:117] "RemoveContainer" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.372815 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": container with ID starting with 1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa not found: ID does not exist" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.372837 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa"} err="failed to get container status \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": rpc error: code = NotFound desc = could not find container \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": container with ID starting with 1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.372849 4924 scope.go:117] "RemoveContainer" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.373185 4924 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": container with ID starting with 40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa not found: ID does not exist" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.373231 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa"} err="failed to get container status \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": rpc error: code = NotFound desc = could not find container \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": container with ID starting with 40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.373264 4924 scope.go:117] "RemoveContainer" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.373550 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": container with ID starting with 62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a not found: ID does not exist" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.373583 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a"} err="failed to get container status \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": rpc error: code = NotFound desc = could not find container 
\"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": container with ID starting with 62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.373600 4924 scope.go:117] "RemoveContainer" containerID="7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.373968 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": container with ID starting with 7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255 not found: ID does not exist" containerID="7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374036 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255"} err="failed to get container status \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": rpc error: code = NotFound desc = could not find container \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": container with ID starting with 7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374074 4924 scope.go:117] "RemoveContainer" containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" Dec 11 13:57:52 crc kubenswrapper[4924]: E1211 13:57:52.374413 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": container with ID starting with 777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e not found: ID does not exist" 
containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374461 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e"} err="failed to get container status \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": rpc error: code = NotFound desc = could not find container \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": container with ID starting with 777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374479 4924 scope.go:117] "RemoveContainer" containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374822 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4"} err="failed to get container status \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": rpc error: code = NotFound desc = could not find container \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": container with ID starting with 1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.374875 4924 scope.go:117] "RemoveContainer" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375164 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6"} err="failed to get container status \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": rpc error: code = NotFound desc = could 
not find container \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": container with ID starting with 29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375191 4924 scope.go:117] "RemoveContainer" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375421 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa"} err="failed to get container status \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": rpc error: code = NotFound desc = could not find container \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": container with ID starting with 1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375449 4924 scope.go:117] "RemoveContainer" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375673 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa"} err="failed to get container status \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": rpc error: code = NotFound desc = could not find container \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": container with ID starting with 40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375698 4924 scope.go:117] "RemoveContainer" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 
13:57:52.375896 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a"} err="failed to get container status \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": rpc error: code = NotFound desc = could not find container \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": container with ID starting with 62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.375918 4924 scope.go:117] "RemoveContainer" containerID="7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376168 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255"} err="failed to get container status \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": rpc error: code = NotFound desc = could not find container \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": container with ID starting with 7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376205 4924 scope.go:117] "RemoveContainer" containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376456 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e"} err="failed to get container status \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": rpc error: code = NotFound desc = could not find container \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": container with ID starting with 
777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376477 4924 scope.go:117] "RemoveContainer" containerID="1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376798 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4"} err="failed to get container status \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": rpc error: code = NotFound desc = could not find container \"1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4\": container with ID starting with 1b86022b6b32ed4b4183c38d48d83968bbc097209e8b1a0be1a6cd7667891ff4 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.376858 4924 scope.go:117] "RemoveContainer" containerID="29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.377165 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6"} err="failed to get container status \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": rpc error: code = NotFound desc = could not find container \"29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6\": container with ID starting with 29a742a918bb3ecc7a0808e7fb141031e3363d6eb858c31da649378a2b2e95e6 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.377195 4924 scope.go:117] "RemoveContainer" containerID="1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.377412 4924 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa"} err="failed to get container status \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": rpc error: code = NotFound desc = could not find container \"1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa\": container with ID starting with 1b62029a25b766f88fc145d49a466f81203b28053bec79b7669435f0f5e99aaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.377437 4924 scope.go:117] "RemoveContainer" containerID="40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378067 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa"} err="failed to get container status \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": rpc error: code = NotFound desc = could not find container \"40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa\": container with ID starting with 40f5297719d79a90006521791d14bea270974d99ff29cebd4f34733f6519fdaa not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378091 4924 scope.go:117] "RemoveContainer" containerID="62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378302 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a"} err="failed to get container status \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": rpc error: code = NotFound desc = could not find container \"62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a\": container with ID starting with 62268c569259894a8eba2eafe7c3dfc818c20e65a84cf0fc9a0ecf136958ae0a not found: ID does not 
exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378344 4924 scope.go:117] "RemoveContainer" containerID="7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378758 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255"} err="failed to get container status \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": rpc error: code = NotFound desc = could not find container \"7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255\": container with ID starting with 7c3bf8125418a79c460646cf63f969b9c4b35b16d8562bfa726f95d0c5f69255 not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378781 4924 scope.go:117] "RemoveContainer" containerID="777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.378957 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e"} err="failed to get container status \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": rpc error: code = NotFound desc = could not find container \"777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e\": container with ID starting with 777c5eea91c08fcfe5b25e51bd6344fdf41f29868cdf112f27cc95a2219fd09e not found: ID does not exist" Dec 11 13:57:52 crc kubenswrapper[4924]: I1211 13:57:52.790824 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.257379 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b"} Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.258187 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.258615 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.258972 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.259027 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.260433 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.260858 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.261234 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.261684 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262146 4924 generic.go:334] "Generic (PLEG): container finished" podID="190c38aa-6c11-4635-a579-fc85fa4e367a" containerID="4de59b17aae87a4d77037cb31e9292f8de76c49e202f75b6d378f64dc2b415b0" exitCode=0 Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262223 4924 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"190c38aa-6c11-4635-a579-fc85fa4e367a","Type":"ContainerDied","Data":"4de59b17aae87a4d77037cb31e9292f8de76c49e202f75b6d378f64dc2b415b0"} Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262222 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262472 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262729 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.262997 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.263528 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.263885 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.264231 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.264590 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.264963 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.308743 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 
13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.309121 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.309494 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.309695 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.309921 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc kubenswrapper[4924]: I1211 13:57:53.310154 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:53 crc 
kubenswrapper[4924]: E1211 13:57:53.882668 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="3.2s" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.485773 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.486314 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.486614 4924 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.486882 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.487101 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.487357 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.572646 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock\") pod \"190c38aa-6c11-4635-a579-fc85fa4e367a\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.572679 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir\") pod \"190c38aa-6c11-4635-a579-fc85fa4e367a\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.572743 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock" (OuterVolumeSpecName: "var-lock") pod "190c38aa-6c11-4635-a579-fc85fa4e367a" (UID: "190c38aa-6c11-4635-a579-fc85fa4e367a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.572788 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "190c38aa-6c11-4635-a579-fc85fa4e367a" (UID: "190c38aa-6c11-4635-a579-fc85fa4e367a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.572769 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access\") pod \"190c38aa-6c11-4635-a579-fc85fa4e367a\" (UID: \"190c38aa-6c11-4635-a579-fc85fa4e367a\") " Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.573146 4924 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.573164 4924 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/190c38aa-6c11-4635-a579-fc85fa4e367a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.577113 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "190c38aa-6c11-4635-a579-fc85fa4e367a" (UID: "190c38aa-6c11-4635-a579-fc85fa4e367a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:57:54 crc kubenswrapper[4924]: I1211 13:57:54.674756 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/190c38aa-6c11-4635-a579-fc85fa4e367a-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.282558 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"190c38aa-6c11-4635-a579-fc85fa4e367a","Type":"ContainerDied","Data":"69181684611bb4d7c97d1dde333369b7931f691d507b3adca0058a0f2776ca2e"} Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.282854 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69181684611bb4d7c97d1dde333369b7931f691d507b3adca0058a0f2776ca2e" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.282680 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.287034 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.287380 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.287694 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:55 crc kubenswrapper[4924]: I1211 13:57:55.288000 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: I1211 13:57:56.814190 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: I1211 13:57:56.814945 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: I1211 13:57:56.815399 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: I1211 13:57:56.815658 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.818449 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:57:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:57:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:57:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:57:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:34f522750c260aee8d7d3d8c16bba58727f5dfb964b4aecc8b09e3e6f7056f12\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9acec1ab208005d77c0ac2722e15bf8620aff3b5c4ab7910d45b05a66d2bb912\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1628955991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:280527b88ffb9a3722a8575a09
953fdf0ffded772ca59c8ebce3a4cd2c62d7cd\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:9c58f6c7c4b4317092e82d86d8cc80efd47c4982299f9bbdb4e8444d4d3df9ca\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234628436},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:04ccbfd75344536604a32b67f586e94cdcd8de3f756189e2f5b8e26a203d0423\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d1fb80806f091a0f5bb1f602d8de38f67c4a42b5076e43f559fa77b8ca880d37\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202228571},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:be25e28aabd5a6e06b4df55e58fa4be426c96c57e3387969e0070e6058149d04\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e6f1bca5d60a93ec9f9bd8ae305cd4ded3f62b2a51bbfdf59e056ea57c0c5b9f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1154573130},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3472024
3b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef9
4a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"si
zeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.818864 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.819077 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.819262 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.819743 4924 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:56 crc kubenswrapper[4924]: E1211 13:57:56.819764 4924 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:57:57 crc kubenswrapper[4924]: E1211 13:57:57.083214 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="6.4s" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.840507 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.840562 4924 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231" exitCode=1 Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.840590 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231"} Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.841002 4924 scope.go:117] "RemoveContainer" containerID="639d7fd515a0e295dc49454efcbd41268157090e8a0380fcb1847346c99d3231" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.841597 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.842118 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.842513 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.842732 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:57:59 crc kubenswrapper[4924]: I1211 13:57:59.842885 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.755891 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" containerName="oauth-openshift" 
containerID="cri-o://c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6" gracePeriod=15 Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.847960 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.848023 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"262a75a0ee87216d7af9087c8bb9fd802a28b5ea14bc013915c396520f754bac"} Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.849123 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.849517 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.849862 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.850286 4924 status_manager.go:851] "Failed to get status for 
pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:00 crc kubenswrapper[4924]: I1211 13:58:00.850564 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.142603 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.143481 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.143691 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.143977 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.144275 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.144441 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.144592 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.213492 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274074 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc 
kubenswrapper[4924]: I1211 13:58:01.274145 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274181 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274208 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274252 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274307 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274394 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274450 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274474 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274491 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274520 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c28d8\" (UniqueName: \"kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274621 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274667 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274709 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.274765 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection\") pod \"82d94e21-1c16-4233-a399-e34c89240e6d\" (UID: \"82d94e21-1c16-4233-a399-e34c89240e6d\") " Dec 11 
13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.275229 4924 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/82d94e21-1c16-4233-a399-e34c89240e6d-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.275572 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.276073 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.276263 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.276529 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.280146 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.280967 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8" (OuterVolumeSpecName: "kube-api-access-c28d8") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "kube-api-access-c28d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.281317 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.281668 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.281928 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.282119 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.282363 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.282767 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.282929 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "82d94e21-1c16-4233-a399-e34c89240e6d" (UID: "82d94e21-1c16-4233-a399-e34c89240e6d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376843 4924 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376889 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376903 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c28d8\" (UniqueName: \"kubernetes.io/projected/82d94e21-1c16-4233-a399-e34c89240e6d-kube-api-access-c28d8\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376914 4924 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376925 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376934 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376943 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376954 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376964 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376974 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-service-ca\") on 
node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376984 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.376993 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.377001 4924 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/82d94e21-1c16-4233-a399-e34c89240e6d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.855550 4924 generic.go:334] "Generic (PLEG): container finished" podID="82d94e21-1c16-4233-a399-e34c89240e6d" containerID="c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6" exitCode=0 Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.855621 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" event={"ID":"82d94e21-1c16-4233-a399-e34c89240e6d","Type":"ContainerDied","Data":"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6"} Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.855966 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" event={"ID":"82d94e21-1c16-4233-a399-e34c89240e6d","Type":"ContainerDied","Data":"f6585b5ff9d7dc5d65ea95dd64ad8fcf3ceaa4643da72497453a72a239205014"} Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.856000 4924 scope.go:117] "RemoveContainer" 
containerID="c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.855686 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.856938 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.857243 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.857519 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.857785 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.857992 4924 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.858296 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.869979 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.870546 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.871027 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 
13:58:01.871353 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.871703 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.871934 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.877539 4924 scope.go:117] "RemoveContainer" containerID="c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6" Dec 11 13:58:01 crc kubenswrapper[4924]: E1211 13:58:01.877868 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6\": container with ID starting with c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6 not found: ID does not exist" containerID="c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.877913 4924 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6"} err="failed to get container status \"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6\": rpc error: code = NotFound desc = could not find container \"c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6\": container with ID starting with c0b5acd387f6ec84f808f77fb8b9aff5deb1fb9be96b397e9af2279dc26ba1b6 not found: ID does not exist" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.934206 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.938241 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.938736 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.939092 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.939485 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.939775 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.940130 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: I1211 13:58:01.940713 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:01 crc kubenswrapper[4924]: E1211 13:58:01.989525 4924 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.69:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802dda0ab946e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC m=+281.944750325,LastTimestamp:2025-12-11 13:57:48.435269348 +0000 UTC m=+281.944750325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:58:03 crc kubenswrapper[4924]: E1211 13:58:03.484543 4924 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.69:6443: connect: connection refused" interval="7s" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.782152 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.782775 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.784106 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.784491 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.787556 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.787918 4924 status_manager.go:851] "Failed to get status for pod" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.788173 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.800178 4924 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.800206 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:03 crc kubenswrapper[4924]: E1211 13:58:03.800721 4924 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.801159 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:03 crc kubenswrapper[4924]: W1211 13:58:03.823017 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b94ed628bb2dc20ac62e92bc2aa2cd9df9d081edfbbb50ce8725e42e6feca7c5 WatchSource:0}: Error finding container b94ed628bb2dc20ac62e92bc2aa2cd9df9d081edfbbb50ce8725e42e6feca7c5: Status 404 returned error can't find the container with id b94ed628bb2dc20ac62e92bc2aa2cd9df9d081edfbbb50ce8725e42e6feca7c5 Dec 11 13:58:03 crc kubenswrapper[4924]: I1211 13:58:03.873302 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b94ed628bb2dc20ac62e92bc2aa2cd9df9d081edfbbb50ce8725e42e6feca7c5"} Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.879824 4924 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="03cb0583ec7ac06403f07de35b2757e7a060fcdf3ef4a73ca5a54ad49c28e899" exitCode=0 Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.879888 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"03cb0583ec7ac06403f07de35b2757e7a060fcdf3ef4a73ca5a54ad49c28e899"} Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.880063 4924 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.880123 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:04 crc kubenswrapper[4924]: E1211 13:58:04.880508 4924 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.880537 4924 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.880789 4924 status_manager.go:851] "Failed to get status for pod" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.881000 4924 status_manager.go:851] "Failed to get status for pod" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" pod="openshift-authentication/oauth-openshift-558db77b4-p7n6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-p7n6k\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.881349 4924 status_manager.go:851] "Failed to get status for pod" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" pod="openshift-marketplace/redhat-operators-mfbdv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-mfbdv\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.881561 4924 status_manager.go:851] "Failed to get status for pod" 
podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" pod="openshift-marketplace/redhat-operators-6xvm8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6xvm8\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:04 crc kubenswrapper[4924]: I1211 13:58:04.881721 4924 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.69:6443: connect: connection refused" Dec 11 13:58:06 crc kubenswrapper[4924]: I1211 13:58:06.662546 4924 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 11 13:58:06 crc kubenswrapper[4924]: I1211 13:58:06.893168 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a15b45870cc8f1ed878952603988314e28b391a5d5458960d416876a31b69963"} Dec 11 13:58:09 crc kubenswrapper[4924]: I1211 13:58:09.911787 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b5364eae94092dfe5c5d24666ec5d34f25e9397786862bfcce2228825169591"} Dec 11 13:58:09 crc kubenswrapper[4924]: I1211 13:58:09.912381 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef71f0175c737c077b94451689788290bee79d1c0ddac7059e9faabe527e1476"} Dec 11 13:58:09 crc kubenswrapper[4924]: I1211 13:58:09.912400 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fd8e2ba0d4fa97e1e907f11d62175ef2a3463b6f40aa6604df86bac4525e25ee"} Dec 11 13:58:10 crc kubenswrapper[4924]: I1211 13:58:10.921143 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"373ece1fb06da6be195983e32685ab60c81e8da9086aa7e56d5b03e79b54a161"} Dec 11 13:58:10 crc kubenswrapper[4924]: I1211 13:58:10.921501 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:10 crc kubenswrapper[4924]: I1211 13:58:10.921459 4924 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:10 crc kubenswrapper[4924]: I1211 13:58:10.921524 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:10 crc kubenswrapper[4924]: I1211 13:58:10.929172 4924 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:11 crc kubenswrapper[4924]: I1211 13:58:11.217786 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:58:11 crc kubenswrapper[4924]: I1211 13:58:11.928578 4924 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:11 crc kubenswrapper[4924]: I1211 13:58:11.928625 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:13 crc kubenswrapper[4924]: I1211 13:58:13.322381 4924 status_manager.go:861] "Pod was deleted and then recreated, skipping 
status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="9d163f31-4be9-40da-b6e6-1dc24838a5a9" Dec 11 13:58:16 crc kubenswrapper[4924]: I1211 13:58:16.952962 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 13:58:16 crc kubenswrapper[4924]: I1211 13:58:16.954059 4924 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569" exitCode=1 Dec 11 13:58:16 crc kubenswrapper[4924]: I1211 13:58:16.954128 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569"} Dec 11 13:58:16 crc kubenswrapper[4924]: I1211 13:58:16.954771 4924 scope.go:117] "RemoveContainer" containerID="fd8bf5efe97c0a8d010cf95ba3eabc475c199956e2fb81c2b60a191d19220569" Dec 11 13:58:17 crc kubenswrapper[4924]: I1211 13:58:17.960167 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 13:58:17 crc kubenswrapper[4924]: I1211 13:58:17.960662 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4dab2405456a80740d21f1ea4b4762c513e80900cb5fa3747f3ffd4cfde1b6ea"} Dec 11 13:58:26 crc kubenswrapper[4924]: I1211 13:58:26.061391 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 13:58:26 crc kubenswrapper[4924]: I1211 13:58:26.843846 4924 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 13:58:26 crc kubenswrapper[4924]: I1211 13:58:26.860416 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 13:58:27 crc kubenswrapper[4924]: I1211 13:58:27.228809 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 13:58:27 crc kubenswrapper[4924]: I1211 13:58:27.921236 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 13:58:28 crc kubenswrapper[4924]: I1211 13:58:28.148121 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 13:58:28 crc kubenswrapper[4924]: I1211 13:58:28.279408 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 13:58:28 crc kubenswrapper[4924]: I1211 13:58:28.545920 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 13:58:29 crc kubenswrapper[4924]: I1211 13:58:29.053947 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 13:58:29 crc kubenswrapper[4924]: I1211 13:58:29.316062 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 13:58:29 crc kubenswrapper[4924]: I1211 13:58:29.326709 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 13:58:29 crc kubenswrapper[4924]: I1211 13:58:29.729393 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 
13:58:30 crc kubenswrapper[4924]: I1211 13:58:30.190014 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 13:58:30 crc kubenswrapper[4924]: I1211 13:58:30.276450 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 13:58:31 crc kubenswrapper[4924]: I1211 13:58:31.215360 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 13:58:31 crc kubenswrapper[4924]: I1211 13:58:31.280174 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 13:58:31 crc kubenswrapper[4924]: I1211 13:58:31.816731 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 13:58:31 crc kubenswrapper[4924]: I1211 13:58:31.932018 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 13:58:32 crc kubenswrapper[4924]: I1211 13:58:32.325295 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 13:58:32 crc kubenswrapper[4924]: I1211 13:58:32.588140 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 13:58:34 crc kubenswrapper[4924]: I1211 13:58:34.000211 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 13:58:34 crc kubenswrapper[4924]: I1211 13:58:34.285739 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 13:58:35 crc kubenswrapper[4924]: I1211 13:58:35.167833 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 
13:58:35 crc kubenswrapper[4924]: I1211 13:58:35.254867 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 13:58:35 crc kubenswrapper[4924]: I1211 13:58:35.392946 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 13:58:35 crc kubenswrapper[4924]: I1211 13:58:35.700359 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 13:58:35 crc kubenswrapper[4924]: I1211 13:58:35.861040 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 13:58:36 crc kubenswrapper[4924]: I1211 13:58:36.083123 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 13:58:36 crc kubenswrapper[4924]: I1211 13:58:36.492442 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 13:58:36 crc kubenswrapper[4924]: I1211 13:58:36.499696 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 13:58:36 crc kubenswrapper[4924]: I1211 13:58:36.736463 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 13:58:37 crc kubenswrapper[4924]: I1211 13:58:37.390890 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 13:58:37 crc kubenswrapper[4924]: I1211 13:58:37.673813 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 13:58:38 crc kubenswrapper[4924]: I1211 13:58:38.161963 4924 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:58:38 crc kubenswrapper[4924]: I1211 13:58:38.682490 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 13:58:44 crc kubenswrapper[4924]: I1211 13:58:44.825143 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 13:58:45 crc kubenswrapper[4924]: I1211 13:58:45.350071 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 13:58:45 crc kubenswrapper[4924]: I1211 13:58:45.728702 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 13:58:46 crc kubenswrapper[4924]: I1211 13:58:46.944670 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 13:58:46 crc kubenswrapper[4924]: I1211 13:58:46.979169 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 13:58:47 crc kubenswrapper[4924]: I1211 13:58:47.086982 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 13:58:47 crc kubenswrapper[4924]: I1211 13:58:47.239731 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 13:58:47 crc kubenswrapper[4924]: I1211 13:58:47.340452 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 13:58:47 crc kubenswrapper[4924]: I1211 13:58:47.651698 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 
13:58:48.029717 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.088642 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.248253 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.270380 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.575087 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.661620 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 13:58:48 crc kubenswrapper[4924]: I1211 13:58:48.748054 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.111355 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.174523 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.210859 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.387682 4924 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.662885 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 13:58:49 crc kubenswrapper[4924]: I1211 13:58:49.856651 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 13:58:50 crc kubenswrapper[4924]: I1211 13:58:50.407041 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 13:58:50 crc kubenswrapper[4924]: I1211 13:58:50.444149 4924 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 13:58:50 crc kubenswrapper[4924]: I1211 13:58:50.586376 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 13:58:50 crc kubenswrapper[4924]: I1211 13:58:50.716499 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 13:58:51 crc kubenswrapper[4924]: I1211 13:58:51.381568 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 13:58:51 crc kubenswrapper[4924]: I1211 13:58:51.476178 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 13:58:51 crc kubenswrapper[4924]: I1211 13:58:51.647000 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 13:58:51 crc kubenswrapper[4924]: I1211 13:58:51.716806 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 13:58:51 crc kubenswrapper[4924]: I1211 
13:58:51.867087 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 13:58:52 crc kubenswrapper[4924]: I1211 13:58:52.467817 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 13:58:52 crc kubenswrapper[4924]: I1211 13:58:52.522289 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 13:58:52 crc kubenswrapper[4924]: I1211 13:58:52.672115 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.028292 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.200263 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.223763 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.349045 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.495814 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.718191 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.837481 4924 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 13:58:53 crc kubenswrapper[4924]: I1211 13:58:53.850122 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.185500 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.323095 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.326506 4924 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.330314 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=75.330294316 podStartE2EDuration="1m15.330294316s" podCreationTimestamp="2025-12-11 13:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:58:13.242181785 +0000 UTC m=+306.751662762" watchObservedRunningTime="2025-12-11 13:58:54.330294316 +0000 UTC m=+347.839775293" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.334647 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-p7n6k"] Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.334937 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.335416 4924 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:54 
crc kubenswrapper[4924]: I1211 13:58:54.335510 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ac2d7ff-9d46-4fe3-a299-9238182e04fb" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.349907 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.350738 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.363219 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=44.363202079 podStartE2EDuration="44.363202079s" podCreationTimestamp="2025-12-11 13:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:58:54.361964024 +0000 UTC m=+347.871445001" watchObservedRunningTime="2025-12-11 13:58:54.363202079 +0000 UTC m=+347.872683056" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.492883 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.549723 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.566354 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.693616 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.723527 4924 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.788680 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" path="/var/lib/kubelet/pods/82d94e21-1c16-4233-a399-e34c89240e6d/volumes" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.897397 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 13:58:54 crc kubenswrapper[4924]: I1211 13:58:54.960797 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.046555 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.170376 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.342194 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.447299 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.512232 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 13:58:55 crc kubenswrapper[4924]: I1211 13:58:55.515671 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.118422 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 13:58:56 crc kubenswrapper[4924]: 
I1211 13:58:56.131591 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.498781 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.519577 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.730716 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.767164 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.849273 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 13:58:56 crc kubenswrapper[4924]: I1211 13:58:56.855422 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.166941 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.184599 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.193786 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.274042 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" 
Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.275565 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.287814 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.507384 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.665042 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 13:58:57 crc kubenswrapper[4924]: I1211 13:58:57.832013 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.048523 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.271197 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.528844 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.634760 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.802096 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.802199 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:58 crc kubenswrapper[4924]: I1211 13:58:58.806651 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.170281 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.179987 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.194815 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.253221 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.263273 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"] Dec 11 13:58:59 crc kubenswrapper[4924]: E1211 13:58:59.263550 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" containerName="oauth-openshift" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.263576 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" containerName="oauth-openshift" Dec 11 13:58:59 crc kubenswrapper[4924]: E1211 13:58:59.263595 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" containerName="installer" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.263603 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" containerName="installer" Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.263709 
4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="190c38aa-6c11-4635-a579-fc85fa4e367a" containerName="installer"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.263732 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="82d94e21-1c16-4233-a399-e34c89240e6d" containerName="oauth-openshift"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.264167 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.277603 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.277837 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.277908 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.277983 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.278179 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.278450 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.278583 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.278631 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.279109 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.280000 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.280214 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.280341 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.281966 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"]
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.282565 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.292348 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.295029 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.299879 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.370876 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.370974 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn72p\" (UniqueName: \"kubernetes.io/projected/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-kube-api-access-jn72p\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371023 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371049 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371115 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371176 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371213 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371234 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371259 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371348 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371383 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371406 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371430 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.371453 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473130 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473202 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn72p\" (UniqueName: \"kubernetes.io/projected/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-kube-api-access-jn72p\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473242 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473269 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473307 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473353 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473379 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473401 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473428 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473463 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473498 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473528 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.473707 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-dir\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.474492 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-audit-policies\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.474634 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.474687 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.475141 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-service-ca\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.475152 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.477455 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.488250 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-router-certs\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.488274 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-error\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.488396 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.488774 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.489457 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-session\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.489579 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-template-login\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.489680 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.489676 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.496624 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn72p\" (UniqueName: \"kubernetes.io/projected/7dcb93f0-e374-48ed-b8d3-a32c87d7822e-kube-api-access-jn72p\") pod \"oauth-openshift-9fbfc7dc4-chdtd\" (UID: \"7dcb93f0-e374-48ed-b8d3-a32c87d7822e\") " pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.581447 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.770158 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 11 13:58:59 crc kubenswrapper[4924]: I1211 13:58:59.795842 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"]
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.087533 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.196898 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" event={"ID":"7dcb93f0-e374-48ed-b8d3-a32c87d7822e","Type":"ContainerStarted","Data":"8c98a5f3df7799639e989174329ca72fc163b0803d7b2cf5fad390c18228a7c9"}
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.202071 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.239301 4924 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.338888 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.382089 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.492927 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.640543 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.716603 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.808739 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.902262 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 11 13:59:00 crc kubenswrapper[4924]: I1211 13:59:00.964718 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.196238 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.259802 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.277084 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.289477 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.547305 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.926048 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 11 13:59:01 crc kubenswrapper[4924]: I1211 13:59:01.952743 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.173846 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.210388 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" event={"ID":"7dcb93f0-e374-48ed-b8d3-a32c87d7822e","Type":"ContainerStarted","Data":"e3026f0edea2b183fe64eb901b10458cbf380697d6365bee24295bc201e5400f"}
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.210716 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.237096 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podStartSLOduration=87.237067777 podStartE2EDuration="1m27.237067777s" podCreationTimestamp="2025-12-11 13:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:59:02.230195028 +0000 UTC m=+355.739676025" watchObservedRunningTime="2025-12-11 13:59:02.237067777 +0000 UTC m=+355.746548774"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.313418 4924 patch_prober.go:28] interesting pod/oauth-openshift-9fbfc7dc4-chdtd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": read tcp 10.217.0.2:34258->10.217.0.57:6443: read: connection reset by peer" start-of-body=
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.313473 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podUID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.57:6443/healthz\": read tcp 10.217.0.2:34258->10.217.0.57:6443: read: connection reset by peer"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.347508 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.465090 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.569190 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.634104 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.647829 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.737282 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.999124 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 11 13:59:02 crc kubenswrapper[4924]: I1211 13:59:02.999350 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.115495 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.216522 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fbfc7dc4-chdtd_7dcb93f0-e374-48ed-b8d3-a32c87d7822e/oauth-openshift/0.log"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.216571 4924 generic.go:334] "Generic (PLEG): container finished" podID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e" containerID="e3026f0edea2b183fe64eb901b10458cbf380697d6365bee24295bc201e5400f" exitCode=255
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.216602 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" event={"ID":"7dcb93f0-e374-48ed-b8d3-a32c87d7822e","Type":"ContainerDied","Data":"e3026f0edea2b183fe64eb901b10458cbf380697d6365bee24295bc201e5400f"}
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.217310 4924 scope.go:117] "RemoveContainer" containerID="e3026f0edea2b183fe64eb901b10458cbf380697d6365bee24295bc201e5400f"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.508540 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.532390 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 11 13:59:03 crc kubenswrapper[4924]: I1211 13:59:03.647198 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.022918 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.138374 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.224307 4924 generic.go:334] "Generic (PLEG): container finished" podID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerID="e74bd6508a2a26a4928d2df3576043ee518d19da2b39c07e7d3fb632b1eaa428" exitCode=0
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.224457 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerDied","Data":"e74bd6508a2a26a4928d2df3576043ee518d19da2b39c07e7d3fb632b1eaa428"}
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.224995 4924 scope.go:117] "RemoveContainer" containerID="e74bd6508a2a26a4928d2df3576043ee518d19da2b39c07e7d3fb632b1eaa428"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.229731 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fbfc7dc4-chdtd_7dcb93f0-e374-48ed-b8d3-a32c87d7822e/oauth-openshift/1.log"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.230962 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fbfc7dc4-chdtd_7dcb93f0-e374-48ed-b8d3-a32c87d7822e/oauth-openshift/0.log"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.231009 4924 generic.go:334] "Generic (PLEG): container finished" podID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e" exitCode=255
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.231036 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" event={"ID":"7dcb93f0-e374-48ed-b8d3-a32c87d7822e","Type":"ContainerDied","Data":"25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e"}
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.231068 4924 scope.go:117] "RemoveContainer" containerID="e3026f0edea2b183fe64eb901b10458cbf380697d6365bee24295bc201e5400f"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.231565 4924 scope.go:117] "RemoveContainer" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e"
Dec 11 13:59:04 crc kubenswrapper[4924]: E1211 13:59:04.231781 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fbfc7dc4-chdtd_openshift-authentication(7dcb93f0-e374-48ed-b8d3-a32c87d7822e)\"" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podUID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.327400 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.327415 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.500207 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.598977 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.633933 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.710279 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 11 13:59:04 crc kubenswrapper[4924]: I1211 13:59:04.854548 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.241977 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dvnc9_6576a4b8-18f3-4084-ae2e-7564ac2f59a1/marketplace-operator/1.log"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.242458 4924 generic.go:334] "Generic (PLEG): container finished" podID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerID="8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48" exitCode=1
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.242524 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerDied","Data":"8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48"}
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.242556 4924 scope.go:117] "RemoveContainer" containerID="e74bd6508a2a26a4928d2df3576043ee518d19da2b39c07e7d3fb632b1eaa428"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.243019 4924 scope.go:117] "RemoveContainer" containerID="8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48"
Dec 11 13:59:05 crc kubenswrapper[4924]: E1211 13:59:05.243295 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-dvnc9_openshift-marketplace(6576a4b8-18f3-4084-ae2e-7564ac2f59a1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.245721 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fbfc7dc4-chdtd_7dcb93f0-e374-48ed-b8d3-a32c87d7822e/oauth-openshift/1.log"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.245976 4924 scope.go:117] "RemoveContainer" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e"
Dec 11 13:59:05 crc kubenswrapper[4924]: E1211 13:59:05.246101 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fbfc7dc4-chdtd_openshift-authentication(7dcb93f0-e374-48ed-b8d3-a32c87d7822e)\"" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podUID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.308262 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.566635 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 11 13:59:05 crc kubenswrapper[4924]: I1211 13:59:05.698292 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.252690 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dvnc9_6576a4b8-18f3-4084-ae2e-7564ac2f59a1/marketplace-operator/1.log"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.263167 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.348451 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.401919 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.470293 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.538442 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.589278 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.770920 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.793912 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.838899 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.838965 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9"
Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.839722 
4924 scope.go:117] "RemoveContainer" containerID="8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48" Dec 11 13:59:06 crc kubenswrapper[4924]: E1211 13:59:06.840129 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-dvnc9_openshift-marketplace(6576a4b8-18f3-4084-ae2e-7564ac2f59a1)\"" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" Dec 11 13:59:06 crc kubenswrapper[4924]: I1211 13:59:06.968427 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.175136 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.215089 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.278835 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.365149 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.408004 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.490734 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 13:59:07 crc kubenswrapper[4924]: I1211 13:59:07.730635 4924 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.101566 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.205986 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.248487 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.433709 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.620421 4924 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.620669 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b" gracePeriod=5 Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.623539 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.659226 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.828289 4924 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Dec 11 13:59:08 crc kubenswrapper[4924]: I1211 13:59:08.835248 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.440913 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.582090 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.583310 4924 scope.go:117] "RemoveContainer" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e" Dec 11 13:59:09 crc kubenswrapper[4924]: E1211 13:59:09.583662 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fbfc7dc4-chdtd_openshift-authentication(7dcb93f0-e374-48ed-b8d3-a32c87d7822e)\"" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podUID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.584260 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.647605 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.713865 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.778857 4924 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Dec 11 13:59:09 crc kubenswrapper[4924]: I1211 13:59:09.861371 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.273130 4924 scope.go:117] "RemoveContainer" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e" Dec 11 13:59:10 crc kubenswrapper[4924]: E1211 13:59:10.273479 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-9fbfc7dc4-chdtd_openshift-authentication(7dcb93f0-e374-48ed-b8d3-a32c87d7822e)\"" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" podUID="7dcb93f0-e374-48ed-b8d3-a32c87d7822e" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.290534 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.484598 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.542227 4924 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.551186 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.617665 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.630861 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 13:59:10 
crc kubenswrapper[4924]: I1211 13:59:10.637230 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.719870 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.733782 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 13:59:10 crc kubenswrapper[4924]: I1211 13:59:10.980755 4924 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.157589 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.487261 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.487406 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.804810 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.821012 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.940950 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.952964 4924 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 13:59:11 crc kubenswrapper[4924]: I1211 13:59:11.994903 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.040617 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.068247 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.234651 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.431596 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.545261 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.678241 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.770167 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 13:59:12 crc kubenswrapper[4924]: I1211 13:59:12.883089 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 13:59:13 crc kubenswrapper[4924]: I1211 13:59:13.239113 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 
13:59:13 crc kubenswrapper[4924]: I1211 13:59:13.365469 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 13:59:13 crc kubenswrapper[4924]: I1211 13:59:13.734120 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.089136 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.090607 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.196758 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.196843 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.294915 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.294964 4924 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b" exitCode=137 Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.295004 4924 scope.go:117] "RemoveContainer" containerID="6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.295067 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.310121 4924 scope.go:117] "RemoveContainer" containerID="6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b" Dec 11 13:59:14 crc kubenswrapper[4924]: E1211 13:59:14.310573 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b\": container with ID starting with 6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b not found: ID does not exist" containerID="6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.310601 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b"} err="failed to get container status \"6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b\": rpc error: code = NotFound desc = could not find container \"6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b\": container with ID starting with 6ea9a3fe7735bbb6f2ee32cde66ea4374ac1f8da884c95450b6411d471f6162b not found: ID does not exist" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367129 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367222 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 
13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367262 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367290 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367346 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367359 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367363 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367433 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367464 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367846 4924 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367863 4924 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367872 4924 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.367882 4924 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.374956 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.469012 4924 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.490312 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.786073 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.789739 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.790101 4924 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.803533 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.803581 4924 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="83a17ac5-b5a8-4863-891f-27aafb49551f" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.807714 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.807780 4924 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
mirrorPodUID="83a17ac5-b5a8-4863-891f-27aafb49551f" Dec 11 13:59:14 crc kubenswrapper[4924]: I1211 13:59:14.813128 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 13:59:15 crc kubenswrapper[4924]: I1211 13:59:15.279472 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 13:59:15 crc kubenswrapper[4924]: I1211 13:59:15.433314 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:59:15 crc kubenswrapper[4924]: I1211 13:59:15.433689 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:59:15 crc kubenswrapper[4924]: I1211 13:59:15.956016 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 13:59:16 crc kubenswrapper[4924]: I1211 13:59:16.217925 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 13:59:16 crc kubenswrapper[4924]: I1211 13:59:16.313053 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 13:59:16 crc kubenswrapper[4924]: I1211 13:59:16.999242 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 13:59:17 crc kubenswrapper[4924]: I1211 13:59:17.763517 4924 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 13:59:17 crc kubenswrapper[4924]: I1211 13:59:17.795191 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 13:59:17 crc kubenswrapper[4924]: I1211 13:59:17.796463 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 13:59:18 crc kubenswrapper[4924]: I1211 13:59:18.186656 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 13:59:18 crc kubenswrapper[4924]: I1211 13:59:18.533110 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 13:59:18 crc kubenswrapper[4924]: I1211 13:59:18.783493 4924 scope.go:117] "RemoveContainer" containerID="8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48" Dec 11 13:59:20 crc kubenswrapper[4924]: I1211 13:59:20.332866 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dvnc9_6576a4b8-18f3-4084-ae2e-7564ac2f59a1/marketplace-operator/1.log" Dec 11 13:59:20 crc kubenswrapper[4924]: I1211 13:59:20.333243 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerStarted","Data":"25f788c079877304957f6db88ac395b470d3f7b4b9545b0e6ab9ae182a732faf"} Dec 11 13:59:20 crc kubenswrapper[4924]: I1211 13:59:20.333839 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" Dec 11 13:59:20 crc kubenswrapper[4924]: I1211 13:59:20.338386 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" 
Dec 11 13:59:20 crc kubenswrapper[4924]: I1211 13:59:20.787798 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 11 13:59:25 crc kubenswrapper[4924]: I1211 13:59:25.783098 4924 scope.go:117] "RemoveContainer" containerID="25cdea7470048bdfcc07c39f91e9f4ded63e6c2676047e8421babd7c939a129e"
Dec 11 13:59:26 crc kubenswrapper[4924]: I1211 13:59:26.364377 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-9fbfc7dc4-chdtd_7dcb93f0-e374-48ed-b8d3-a32c87d7822e/oauth-openshift/1.log"
Dec 11 13:59:26 crc kubenswrapper[4924]: I1211 13:59:26.364459 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd" event={"ID":"7dcb93f0-e374-48ed-b8d3-a32c87d7822e","Type":"ContainerStarted","Data":"1101941572eafb365ef84e6c8d984d73344583362a36e3fd1dc41f77a9d74684"}
Dec 11 13:59:26 crc kubenswrapper[4924]: I1211 13:59:26.364937 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:59:26 crc kubenswrapper[4924]: I1211 13:59:26.568489 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9fbfc7dc4-chdtd"
Dec 11 13:59:45 crc kubenswrapper[4924]: I1211 13:59:45.433473 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 13:59:45 crc kubenswrapper[4924]: I1211 13:59:45.433922 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 13:59:50 crc kubenswrapper[4924]: I1211 13:59:50.410825 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"]
Dec 11 13:59:50 crc kubenswrapper[4924]: I1211 13:59:50.411031 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" podUID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" containerName="controller-manager" containerID="cri-o://74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5" gracePeriod=30
Dec 11 13:59:50 crc kubenswrapper[4924]: I1211 13:59:50.509595 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"]
Dec 11 13:59:50 crc kubenswrapper[4924]: I1211 13:59:50.509819 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerName="route-controller-manager" containerID="cri-o://b6ee0ec4ebac346a9154663f8d0f53c91ea9da9fdd7d51b011f37f8d90a8fd62" gracePeriod=30
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.296306 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.439651 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk4dh\" (UniqueName: \"kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh\") pod \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.439695 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert\") pod \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.439719 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles\") pod \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.439738 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca\") pod \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.439796 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config\") pod \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\" (UID: \"bfceb2d6-c7d2-4447-b56d-b71db58955eb\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.440618 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca" (OuterVolumeSpecName: "client-ca") pod "bfceb2d6-c7d2-4447-b56d-b71db58955eb" (UID: "bfceb2d6-c7d2-4447-b56d-b71db58955eb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.440686 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config" (OuterVolumeSpecName: "config") pod "bfceb2d6-c7d2-4447-b56d-b71db58955eb" (UID: "bfceb2d6-c7d2-4447-b56d-b71db58955eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.441142 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bfceb2d6-c7d2-4447-b56d-b71db58955eb" (UID: "bfceb2d6-c7d2-4447-b56d-b71db58955eb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.449283 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh" (OuterVolumeSpecName: "kube-api-access-zk4dh") pod "bfceb2d6-c7d2-4447-b56d-b71db58955eb" (UID: "bfceb2d6-c7d2-4447-b56d-b71db58955eb"). InnerVolumeSpecName "kube-api-access-zk4dh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.449586 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bfceb2d6-c7d2-4447-b56d-b71db58955eb" (UID: "bfceb2d6-c7d2-4447-b56d-b71db58955eb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.508490 4924 generic.go:334] "Generic (PLEG): container finished" podID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerID="b6ee0ec4ebac346a9154663f8d0f53c91ea9da9fdd7d51b011f37f8d90a8fd62" exitCode=0
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.508550 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" event={"ID":"b96c821c-a977-4a01-91d1-9c7df59ae49b","Type":"ContainerDied","Data":"b6ee0ec4ebac346a9154663f8d0f53c91ea9da9fdd7d51b011f37f8d90a8fd62"}
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.510035 4924 generic.go:334] "Generic (PLEG): container finished" podID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" containerID="74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5" exitCode=0
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.510058 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" event={"ID":"bfceb2d6-c7d2-4447-b56d-b71db58955eb","Type":"ContainerDied","Data":"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"}
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.510072 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz" event={"ID":"bfceb2d6-c7d2-4447-b56d-b71db58955eb","Type":"ContainerDied","Data":"26d4f08c9a5232d200da5ff969ee0fa4144dc8454ed41f0da97d71ee18262b1f"}
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.510086 4924 scope.go:117] "RemoveContainer" containerID="74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.510177 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nzzhz"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.518623 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.543317 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-config\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.543362 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk4dh\" (UniqueName: \"kubernetes.io/projected/bfceb2d6-c7d2-4447-b56d-b71db58955eb-kube-api-access-zk4dh\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.543373 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfceb2d6-c7d2-4447-b56d-b71db58955eb-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.543382 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.543391 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bfceb2d6-c7d2-4447-b56d-b71db58955eb-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.548647 4924 scope.go:117] "RemoveContainer" containerID="74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"
Dec 11 13:59:51 crc kubenswrapper[4924]: E1211 13:59:51.549418 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5\": container with ID starting with 74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5 not found: ID does not exist" containerID="74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.549478 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5"} err="failed to get container status \"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5\": rpc error: code = NotFound desc = could not find container \"74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5\": container with ID starting with 74df726cb33dfa93daba3b960e31777b8c03e919c5f10b778d262d79a6e873d5 not found: ID does not exist"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.567593 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"]
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.571356 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nzzhz"]
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.644669 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert\") pod \"b96c821c-a977-4a01-91d1-9c7df59ae49b\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.644802 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjp6p\" (UniqueName: \"kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p\") pod \"b96c821c-a977-4a01-91d1-9c7df59ae49b\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.644867 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config\") pod \"b96c821c-a977-4a01-91d1-9c7df59ae49b\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.644904 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca\") pod \"b96c821c-a977-4a01-91d1-9c7df59ae49b\" (UID: \"b96c821c-a977-4a01-91d1-9c7df59ae49b\") "
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.645941 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca" (OuterVolumeSpecName: "client-ca") pod "b96c821c-a977-4a01-91d1-9c7df59ae49b" (UID: "b96c821c-a977-4a01-91d1-9c7df59ae49b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.646040 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config" (OuterVolumeSpecName: "config") pod "b96c821c-a977-4a01-91d1-9c7df59ae49b" (UID: "b96c821c-a977-4a01-91d1-9c7df59ae49b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.648274 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p" (OuterVolumeSpecName: "kube-api-access-zjp6p") pod "b96c821c-a977-4a01-91d1-9c7df59ae49b" (UID: "b96c821c-a977-4a01-91d1-9c7df59ae49b"). InnerVolumeSpecName "kube-api-access-zjp6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.648898 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b96c821c-a977-4a01-91d1-9c7df59ae49b" (UID: "b96c821c-a977-4a01-91d1-9c7df59ae49b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.745951 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.746002 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b96c821c-a977-4a01-91d1-9c7df59ae49b-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.746012 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjp6p\" (UniqueName: \"kubernetes.io/projected/b96c821c-a977-4a01-91d1-9c7df59ae49b-kube-api-access-zjp6p\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.746023 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b96c821c-a977-4a01-91d1-9c7df59ae49b-config\") on node \"crc\" DevicePath \"\""
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.892316 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"]
Dec 11 13:59:51 crc kubenswrapper[4924]: E1211 13:59:51.892977 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" containerName="controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893078 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" containerName="controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: E1211 13:59:51.893161 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerName="route-controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893234 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerName="route-controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: E1211 13:59:51.893306 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893379 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893543 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" containerName="controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893670 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" containerName="route-controller-manager"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.893740 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.894365 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.896147 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"]
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.896802 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.901877 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.902366 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.902533 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.902845 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.903042 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.904607 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.908891 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"]
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.909550 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.911699 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"]
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948807 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948868 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948905 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948934 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948956 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.948984 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtq6k\" (UniqueName: \"kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.949003 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hbgh\" (UniqueName: \"kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.949026 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:51 crc kubenswrapper[4924]: I1211 13:59:51.949059 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.049994 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050070 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050095 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050119 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050142 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050159 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050179 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtq6k\" (UniqueName: \"kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050195 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hbgh\" (UniqueName: \"kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.050210 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.051194 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.051190 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.052646 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.053741 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.053757 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.057183 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.069551 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.074497 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hbgh\" (UniqueName: \"kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh\") pod \"route-controller-manager-578744fd49-hjjcp\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.074539 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtq6k\" (UniqueName: \"kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k\") pod \"controller-manager-688787759f-tlct9\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.220862 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.227261 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.418547 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"]
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.455970 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"]
Dec 11 13:59:52 crc kubenswrapper[4924]: W1211 13:59:52.475198 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafb117b4_fce3_47a0_ab91_243ebbbfe764.slice/crio-4e9db0a7958040435c0b90ed90f7c1a1f88a66c4a366c486d8e365c3606c8704 WatchSource:0}: Error finding container 4e9db0a7958040435c0b90ed90f7c1a1f88a66c4a366c486d8e365c3606c8704: Status 404 returned error can't find the container with id 4e9db0a7958040435c0b90ed90f7c1a1f88a66c4a366c486d8e365c3606c8704
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.521797 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" event={"ID":"afb117b4-fce3-47a0-ab91-243ebbbfe764","Type":"ContainerStarted","Data":"4e9db0a7958040435c0b90ed90f7c1a1f88a66c4a366c486d8e365c3606c8704"}
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.524582 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" event={"ID":"eb0c2762-545e-4330-9b6e-bbce1f5842e7","Type":"ContainerStarted","Data":"d5177e98f9bb7d61fe0642263fbfdfc687172dc246c7b076a2cf5a8cd2dd1108"}
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.525751 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c" event={"ID":"b96c821c-a977-4a01-91d1-9c7df59ae49b","Type":"ContainerDied","Data":"ff823afea1655473999eafdbeedd208c8e12d224d8b06ae01fd21b4eefde5f6e"}
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.525798 4924 scope.go:117] "RemoveContainer" containerID="b6ee0ec4ebac346a9154663f8d0f53c91ea9da9fdd7d51b011f37f8d90a8fd62"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.525879 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.582185 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"]
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.585201 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-sc25c"]
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.790023 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96c821c-a977-4a01-91d1-9c7df59ae49b" path="/var/lib/kubelet/pods/b96c821c-a977-4a01-91d1-9c7df59ae49b/volumes"
Dec 11 13:59:52 crc kubenswrapper[4924]: I1211 13:59:52.790849 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfceb2d6-c7d2-4447-b56d-b71db58955eb" path="/var/lib/kubelet/pods/bfceb2d6-c7d2-4447-b56d-b71db58955eb/volumes"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.531510 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" event={"ID":"eb0c2762-545e-4330-9b6e-bbce1f5842e7","Type":"ContainerStarted","Data":"de8950371c9bdbfcefb1d529004eb68e3e9c1dead6abd1cd7e097a2abc6663eb"}
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.531735 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.534361 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" event={"ID":"afb117b4-fce3-47a0-ab91-243ebbbfe764","Type":"ContainerStarted","Data":"f80ab2a30fbd6dc4c3282e5d445182cf5a743a5cde5a71825ea518a4361045d7"}
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.534539 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.539355 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-688787759f-tlct9"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.578007 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" podStartSLOduration=3.577989322 podStartE2EDuration="3.577989322s" podCreationTimestamp="2025-12-11 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:59:53.554682366 +0000 UTC m=+407.064163343" watchObservedRunningTime="2025-12-11 13:59:53.577989322 +0000 UTC m=+407.087470289"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.694663 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"
Dec 11 13:59:53 crc kubenswrapper[4924]: I1211 13:59:53.716572 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" podStartSLOduration=3.71655089 podStartE2EDuration="3.71655089s" podCreationTimestamp="2025-12-11 13:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:59:53.587135097 +0000 UTC m=+407.096616074" watchObservedRunningTime="2025-12-11 13:59:53.71655089 +0000 UTC m=+407.226031887"
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.175113 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87"]
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.176402 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87"
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.178233 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.185875 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.186595 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87"]
Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.267860 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdvw\" (UniqueName: \"kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw\") pod 
\"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.267924 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.267971 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.369468 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.369561 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdvw\" (UniqueName: \"kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.369585 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.370403 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.380570 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.392689 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdvw\" (UniqueName: \"kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw\") pod \"collect-profiles-29424360-5bg87\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.452947 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"] Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.453161 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" 
podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerName="controller-manager" containerID="cri-o://f80ab2a30fbd6dc4c3282e5d445182cf5a743a5cde5a71825ea518a4361045d7" gracePeriod=30 Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.457399 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"] Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.457587 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerName="route-controller-manager" containerID="cri-o://de8950371c9bdbfcefb1d529004eb68e3e9c1dead6abd1cd7e097a2abc6663eb" gracePeriod=30 Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.494633 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:00 crc kubenswrapper[4924]: I1211 14:00:00.913604 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87"] Dec 11 14:00:01 crc kubenswrapper[4924]: I1211 14:00:01.572534 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" event={"ID":"3fbbd1ea-1176-4561-b12f-9c3bad018c0a","Type":"ContainerStarted","Data":"9f32379e85960c34092b415ca70cd81cb47689fef837f8703316c0b64a78489a"} Dec 11 14:00:02 crc kubenswrapper[4924]: I1211 14:00:02.222005 4924 patch_prober.go:28] interesting pod/route-controller-manager-578744fd49-hjjcp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Dec 11 14:00:02 crc kubenswrapper[4924]: I1211 14:00:02.222483 4924 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Dec 11 14:00:02 crc kubenswrapper[4924]: I1211 14:00:02.229697 4924 patch_prober.go:28] interesting pod/controller-manager-688787759f-tlct9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Dec 11 14:00:02 crc kubenswrapper[4924]: I1211 14:00:02.229755 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Dec 11 14:00:05 crc kubenswrapper[4924]: I1211 14:00:05.097275 4924 generic.go:334] "Generic (PLEG): container finished" podID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerID="f80ab2a30fbd6dc4c3282e5d445182cf5a743a5cde5a71825ea518a4361045d7" exitCode=0 Dec 11 14:00:05 crc kubenswrapper[4924]: I1211 14:00:05.097367 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" event={"ID":"afb117b4-fce3-47a0-ab91-243ebbbfe764","Type":"ContainerDied","Data":"f80ab2a30fbd6dc4c3282e5d445182cf5a743a5cde5a71825ea518a4361045d7"} Dec 11 14:00:05 crc kubenswrapper[4924]: I1211 14:00:05.099290 4924 generic.go:334] "Generic (PLEG): container finished" podID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerID="de8950371c9bdbfcefb1d529004eb68e3e9c1dead6abd1cd7e097a2abc6663eb" exitCode=0 Dec 11 14:00:05 crc 
kubenswrapper[4924]: I1211 14:00:05.099314 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" event={"ID":"eb0c2762-545e-4330-9b6e-bbce1f5842e7","Type":"ContainerDied","Data":"de8950371c9bdbfcefb1d529004eb68e3e9c1dead6abd1cd7e097a2abc6663eb"} Dec 11 14:00:06 crc kubenswrapper[4924]: I1211 14:00:06.106611 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" event={"ID":"3fbbd1ea-1176-4561-b12f-9c3bad018c0a","Type":"ContainerStarted","Data":"286eaba75ee49527475a8e2c1a70319fb7dc1c5db7d8176da1dd0db06d8978d8"} Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.111899 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" event={"ID":"eb0c2762-545e-4330-9b6e-bbce1f5842e7","Type":"ContainerDied","Data":"d5177e98f9bb7d61fe0642263fbfdfc687172dc246c7b076a2cf5a8cd2dd1108"} Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.112150 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5177e98f9bb7d61fe0642263fbfdfc687172dc246c7b076a2cf5a8cd2dd1108" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.125611 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.151258 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:07 crc kubenswrapper[4924]: E1211 14:00:07.151671 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerName="route-controller-manager" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.151737 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerName="route-controller-manager" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.151972 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" containerName="route-controller-manager" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.152387 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.160925 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.172036 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config\") pod \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.172265 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert\") pod \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.172449 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hbgh\" (UniqueName: \"kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh\") pod \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.172589 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca\") pod \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\" (UID: \"eb0c2762-545e-4330-9b6e-bbce1f5842e7\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.172804 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config" (OuterVolumeSpecName: "config") pod "eb0c2762-545e-4330-9b6e-bbce1f5842e7" (UID: 
"eb0c2762-545e-4330-9b6e-bbce1f5842e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.173019 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.173078 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb0c2762-545e-4330-9b6e-bbce1f5842e7" (UID: "eb0c2762-545e-4330-9b6e-bbce1f5842e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.177207 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh" (OuterVolumeSpecName: "kube-api-access-8hbgh") pod "eb0c2762-545e-4330-9b6e-bbce1f5842e7" (UID: "eb0c2762-545e-4330-9b6e-bbce1f5842e7"). InnerVolumeSpecName "kube-api-access-8hbgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.182291 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb0c2762-545e-4330-9b6e-bbce1f5842e7" (UID: "eb0c2762-545e-4330-9b6e-bbce1f5842e7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.231393 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274034 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274099 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274155 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8wf\" (UniqueName: \"kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274210 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274274 4924 reconciler_common.go:293] 
"Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb0c2762-545e-4330-9b6e-bbce1f5842e7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274401 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hbgh\" (UniqueName: \"kubernetes.io/projected/eb0c2762-545e-4330-9b6e-bbce1f5842e7-kube-api-access-8hbgh\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.274446 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb0c2762-545e-4330-9b6e-bbce1f5842e7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375450 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert\") pod \"afb117b4-fce3-47a0-ab91-243ebbbfe764\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375574 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config\") pod \"afb117b4-fce3-47a0-ab91-243ebbbfe764\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375615 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles\") pod \"afb117b4-fce3-47a0-ab91-243ebbbfe764\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375671 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca\") 
pod \"afb117b4-fce3-47a0-ab91-243ebbbfe764\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375723 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtq6k\" (UniqueName: \"kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k\") pod \"afb117b4-fce3-47a0-ab91-243ebbbfe764\" (UID: \"afb117b4-fce3-47a0-ab91-243ebbbfe764\") " Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375838 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375885 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8wf\" (UniqueName: \"kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375912 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.375948 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.376643 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca" (OuterVolumeSpecName: "client-ca") pod "afb117b4-fce3-47a0-ab91-243ebbbfe764" (UID: "afb117b4-fce3-47a0-ab91-243ebbbfe764"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.376673 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "afb117b4-fce3-47a0-ab91-243ebbbfe764" (UID: "afb117b4-fce3-47a0-ab91-243ebbbfe764"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.377156 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.377171 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.377790 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config" (OuterVolumeSpecName: "config") pod "afb117b4-fce3-47a0-ab91-243ebbbfe764" (UID: "afb117b4-fce3-47a0-ab91-243ebbbfe764"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.378632 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afb117b4-fce3-47a0-ab91-243ebbbfe764" (UID: "afb117b4-fce3-47a0-ab91-243ebbbfe764"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.379237 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.379694 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k" (OuterVolumeSpecName: "kube-api-access-qtq6k") pod "afb117b4-fce3-47a0-ab91-243ebbbfe764" (UID: "afb117b4-fce3-47a0-ab91-243ebbbfe764"). InnerVolumeSpecName "kube-api-access-qtq6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.392404 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8wf\" (UniqueName: \"kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf\") pod \"route-controller-manager-98998769b-bwq7b\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.475436 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.479872 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.479901 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtq6k\" (UniqueName: \"kubernetes.io/projected/afb117b4-fce3-47a0-ab91-243ebbbfe764-kube-api-access-qtq6k\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.479911 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afb117b4-fce3-47a0-ab91-243ebbbfe764-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.479922 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.479933 4924 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/afb117b4-fce3-47a0-ab91-243ebbbfe764-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:07 crc kubenswrapper[4924]: I1211 14:00:07.705088 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:07 crc kubenswrapper[4924]: W1211 14:00:07.712388 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode394d94b_bf55_48b1_a200_b39ced56d1ad.slice/crio-36a26605812140de9683c206c4af7953d57ca898a7487398ccade16500f93ce5 WatchSource:0}: Error finding container 
36a26605812140de9683c206c4af7953d57ca898a7487398ccade16500f93ce5: Status 404 returned error can't find the container with id 36a26605812140de9683c206c4af7953d57ca898a7487398ccade16500f93ce5 Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.117593 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" event={"ID":"e394d94b-bf55-48b1-a200-b39ced56d1ad","Type":"ContainerStarted","Data":"36a26605812140de9683c206c4af7953d57ca898a7487398ccade16500f93ce5"} Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.119162 4924 generic.go:334] "Generic (PLEG): container finished" podID="3fbbd1ea-1176-4561-b12f-9c3bad018c0a" containerID="286eaba75ee49527475a8e2c1a70319fb7dc1c5db7d8176da1dd0db06d8978d8" exitCode=0 Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.119232 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" event={"ID":"3fbbd1ea-1176-4561-b12f-9c3bad018c0a","Type":"ContainerDied","Data":"286eaba75ee49527475a8e2c1a70319fb7dc1c5db7d8176da1dd0db06d8978d8"} Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.121712 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.121728 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688787759f-tlct9" event={"ID":"afb117b4-fce3-47a0-ab91-243ebbbfe764","Type":"ContainerDied","Data":"4e9db0a7958040435c0b90ed90f7c1a1f88a66c4a366c486d8e365c3606c8704"} Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.121782 4924 scope.go:117] "RemoveContainer" containerID="f80ab2a30fbd6dc4c3282e5d445182cf5a743a5cde5a71825ea518a4361045d7" Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.121712 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp" Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.154596 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"] Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.159297 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-688787759f-tlct9"] Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.166831 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"] Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.169809 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-578744fd49-hjjcp"] Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.789188 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" path="/var/lib/kubelet/pods/afb117b4-fce3-47a0-ab91-243ebbbfe764/volumes" Dec 11 14:00:08 crc kubenswrapper[4924]: I1211 14:00:08.789816 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0c2762-545e-4330-9b6e-bbce1f5842e7" path="/var/lib/kubelet/pods/eb0c2762-545e-4330-9b6e-bbce1f5842e7/volumes" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.130139 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" event={"ID":"e394d94b-bf55-48b1-a200-b39ced56d1ad","Type":"ContainerStarted","Data":"72d56194b3ed902d93abec7813aafd4a82c63f9bc56b6b85eab9dac2e122db7e"} Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.131469 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:09 crc kubenswrapper[4924]: 
I1211 14:00:09.134736 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.149480 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" podStartSLOduration=9.149457629 podStartE2EDuration="9.149457629s" podCreationTimestamp="2025-12-11 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:00:09.146460582 +0000 UTC m=+422.655941569" watchObservedRunningTime="2025-12-11 14:00:09.149457629 +0000 UTC m=+422.658938616" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.386080 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.523681 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume\") pod \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.523751 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrdvw\" (UniqueName: \"kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw\") pod \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.523829 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume\") pod 
\"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\" (UID: \"3fbbd1ea-1176-4561-b12f-9c3bad018c0a\") " Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.524471 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "3fbbd1ea-1176-4561-b12f-9c3bad018c0a" (UID: "3fbbd1ea-1176-4561-b12f-9c3bad018c0a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.528478 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3fbbd1ea-1176-4561-b12f-9c3bad018c0a" (UID: "3fbbd1ea-1176-4561-b12f-9c3bad018c0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.528717 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw" (OuterVolumeSpecName: "kube-api-access-jrdvw") pod "3fbbd1ea-1176-4561-b12f-9c3bad018c0a" (UID: "3fbbd1ea-1176-4561-b12f-9c3bad018c0a"). InnerVolumeSpecName "kube-api-access-jrdvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.625196 4924 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.625252 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrdvw\" (UniqueName: \"kubernetes.io/projected/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-kube-api-access-jrdvw\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.625264 4924 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fbbd1ea-1176-4561-b12f-9c3bad018c0a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.907382 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67db58fbdc-xmtcg"] Dec 11 14:00:09 crc kubenswrapper[4924]: E1211 14:00:09.907801 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fbbd1ea-1176-4561-b12f-9c3bad018c0a" containerName="collect-profiles" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.907903 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fbbd1ea-1176-4561-b12f-9c3bad018c0a" containerName="collect-profiles" Dec 11 14:00:09 crc kubenswrapper[4924]: E1211 14:00:09.907967 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerName="controller-manager" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.908028 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerName="controller-manager" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.908189 4924 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="afb117b4-fce3-47a0-ab91-243ebbbfe764" containerName="controller-manager" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.908278 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fbbd1ea-1176-4561-b12f-9c3bad018c0a" containerName="collect-profiles" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.908885 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.912042 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.913347 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.913769 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.913934 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.914047 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.914074 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.920078 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db58fbdc-xmtcg"] Dec 11 14:00:09 crc kubenswrapper[4924]: I1211 14:00:09.920741 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 14:00:10 
crc kubenswrapper[4924]: I1211 14:00:10.033928 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-client-ca\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.033987 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nbt\" (UniqueName: \"kubernetes.io/projected/ee7b9cff-cc14-4c63-b35e-860c69e13954-kube-api-access-76nbt\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.034024 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-config\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.034251 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee7b9cff-cc14-4c63-b35e-860c69e13954-serving-cert\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.034455 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-proxy-ca-bundles\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.136010 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-config\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.136071 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee7b9cff-cc14-4c63-b35e-860c69e13954-serving-cert\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.136098 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-proxy-ca-bundles\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.136154 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-client-ca\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.136185 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76nbt\" (UniqueName: \"kubernetes.io/projected/ee7b9cff-cc14-4c63-b35e-860c69e13954-kube-api-access-76nbt\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.137475 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.137665 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-proxy-ca-bundles\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.137633 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-5bg87" event={"ID":"3fbbd1ea-1176-4561-b12f-9c3bad018c0a","Type":"ContainerDied","Data":"9f32379e85960c34092b415ca70cd81cb47689fef837f8703316c0b64a78489a"} Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.137678 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-config\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.137765 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f32379e85960c34092b415ca70cd81cb47689fef837f8703316c0b64a78489a" Dec 11 14:00:10 crc 
kubenswrapper[4924]: I1211 14:00:10.138241 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee7b9cff-cc14-4c63-b35e-860c69e13954-client-ca\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.142596 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee7b9cff-cc14-4c63-b35e-860c69e13954-serving-cert\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.152615 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76nbt\" (UniqueName: \"kubernetes.io/projected/ee7b9cff-cc14-4c63-b35e-860c69e13954-kube-api-access-76nbt\") pod \"controller-manager-67db58fbdc-xmtcg\" (UID: \"ee7b9cff-cc14-4c63-b35e-860c69e13954\") " pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.248311 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:10 crc kubenswrapper[4924]: I1211 14:00:10.429677 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db58fbdc-xmtcg"] Dec 11 14:00:11 crc kubenswrapper[4924]: I1211 14:00:11.145677 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" event={"ID":"ee7b9cff-cc14-4c63-b35e-860c69e13954","Type":"ContainerStarted","Data":"c8bbf8a6870c7a792baa5ff2a7929ee80eb6d5ea8e7a27f3fb3657414c34c605"} Dec 11 14:00:11 crc kubenswrapper[4924]: I1211 14:00:11.146050 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" event={"ID":"ee7b9cff-cc14-4c63-b35e-860c69e13954","Type":"ContainerStarted","Data":"6e591a99057e60fc416eda4b7e4ff3ad8005659e8fbfd81dd450f2b8248d72c7"} Dec 11 14:00:12 crc kubenswrapper[4924]: I1211 14:00:12.153039 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:12 crc kubenswrapper[4924]: I1211 14:00:12.158730 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" Dec 11 14:00:12 crc kubenswrapper[4924]: I1211 14:00:12.174459 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67db58fbdc-xmtcg" podStartSLOduration=12.174440982 podStartE2EDuration="12.174440982s" podCreationTimestamp="2025-12-11 14:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:00:12.171089075 +0000 UTC m=+425.680570062" watchObservedRunningTime="2025-12-11 14:00:12.174440982 +0000 UTC m=+425.683921959" Dec 11 14:00:15 crc 
kubenswrapper[4924]: I1211 14:00:15.432855 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:00:15 crc kubenswrapper[4924]: I1211 14:00:15.433167 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:00:15 crc kubenswrapper[4924]: I1211 14:00:15.433285 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 14:00:15 crc kubenswrapper[4924]: I1211 14:00:15.433954 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:00:15 crc kubenswrapper[4924]: I1211 14:00:15.434011 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8" gracePeriod=600 Dec 11 14:00:16 crc kubenswrapper[4924]: I1211 14:00:16.176739 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" 
containerID="c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8" exitCode=0 Dec 11 14:00:16 crc kubenswrapper[4924]: I1211 14:00:16.176799 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8"} Dec 11 14:00:16 crc kubenswrapper[4924]: I1211 14:00:16.177370 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f"} Dec 11 14:00:16 crc kubenswrapper[4924]: I1211 14:00:16.177411 4924 scope.go:117] "RemoveContainer" containerID="eaf603e5a347993f850eff6e1aedf330b90ebb215de4d3bc6594d0660f9e0543" Dec 11 14:00:19 crc kubenswrapper[4924]: I1211 14:00:19.169446 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 14:00:19 crc kubenswrapper[4924]: I1211 14:00:19.170077 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mfbdv" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="registry-server" containerID="cri-o://345db10d571dd29944e141c350767597db30d5854144839dee2c77ce7856f8e6" gracePeriod=2 Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.205657 4924 generic.go:334] "Generic (PLEG): container finished" podID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerID="345db10d571dd29944e141c350767597db30d5854144839dee2c77ce7856f8e6" exitCode=0 Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.205723 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" 
event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerDied","Data":"345db10d571dd29944e141c350767597db30d5854144839dee2c77ce7856f8e6"} Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.335065 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.477697 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mzc\" (UniqueName: \"kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc\") pod \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.477819 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content\") pod \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.477972 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities\") pod \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\" (UID: \"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d\") " Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.479207 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities" (OuterVolumeSpecName: "utilities") pod "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" (UID: "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.484515 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc" (OuterVolumeSpecName: "kube-api-access-64mzc") pod "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" (UID: "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d"). InnerVolumeSpecName "kube-api-access-64mzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.579394 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.579431 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64mzc\" (UniqueName: \"kubernetes.io/projected/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-kube-api-access-64mzc\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.596544 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" (UID: "e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:20 crc kubenswrapper[4924]: I1211 14:00:20.680584 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.212721 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mfbdv" event={"ID":"e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d","Type":"ContainerDied","Data":"fc74e4e26828d01c42180dd22f4fb74cf25eae8e698c332f10c78384f68483b2"} Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.212784 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mfbdv" Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.213073 4924 scope.go:117] "RemoveContainer" containerID="345db10d571dd29944e141c350767597db30d5854144839dee2c77ce7856f8e6" Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.229886 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.230164 4924 scope.go:117] "RemoveContainer" containerID="3b276879bb1ba21811a9b6d1e41a5194942b7ac347bc2441f7b55a5d55e82d6f" Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.235228 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mfbdv"] Dec 11 14:00:21 crc kubenswrapper[4924]: I1211 14:00:21.250203 4924 scope.go:117] "RemoveContainer" containerID="baeb1110fa8eae3faf4052d6ff507732787f376867f57f7b358d91867dcb5480" Dec 11 14:00:22 crc kubenswrapper[4924]: I1211 14:00:22.788903 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" path="/var/lib/kubelet/pods/e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d/volumes" Dec 11 14:00:50 crc 
kubenswrapper[4924]: I1211 14:00:50.423625 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:50 crc kubenswrapper[4924]: I1211 14:00:50.424476 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" podUID="e394d94b-bf55-48b1-a200-b39ced56d1ad" containerName="route-controller-manager" containerID="cri-o://72d56194b3ed902d93abec7813aafd4a82c63f9bc56b6b85eab9dac2e122db7e" gracePeriod=30 Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.363344 4924 generic.go:334] "Generic (PLEG): container finished" podID="e394d94b-bf55-48b1-a200-b39ced56d1ad" containerID="72d56194b3ed902d93abec7813aafd4a82c63f9bc56b6b85eab9dac2e122db7e" exitCode=0 Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.363473 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" event={"ID":"e394d94b-bf55-48b1-a200-b39ced56d1ad","Type":"ContainerDied","Data":"72d56194b3ed902d93abec7813aafd4a82c63f9bc56b6b85eab9dac2e122db7e"} Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.507848 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.544938 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz"] Dec 11 14:00:51 crc kubenswrapper[4924]: E1211 14:00:51.545187 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e394d94b-bf55-48b1-a200-b39ced56d1ad" containerName="route-controller-manager" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545207 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e394d94b-bf55-48b1-a200-b39ced56d1ad" containerName="route-controller-manager" Dec 11 14:00:51 crc kubenswrapper[4924]: E1211 14:00:51.545225 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="extract-utilities" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545233 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="extract-utilities" Dec 11 14:00:51 crc kubenswrapper[4924]: E1211 14:00:51.545259 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="registry-server" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545267 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="registry-server" Dec 11 14:00:51 crc kubenswrapper[4924]: E1211 14:00:51.545280 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="extract-content" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545287 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="extract-content" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545405 4924 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e8c21ddd-d3c9-42cc-ab22-169a7f8bf06d" containerName="registry-server" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545418 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="e394d94b-bf55-48b1-a200-b39ced56d1ad" containerName="route-controller-manager" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.545800 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.558841 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz"] Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672575 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config\") pod \"e394d94b-bf55-48b1-a200-b39ced56d1ad\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672639 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca\") pod \"e394d94b-bf55-48b1-a200-b39ced56d1ad\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672679 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert\") pod \"e394d94b-bf55-48b1-a200-b39ced56d1ad\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672748 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8wf\" (UniqueName: 
\"kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf\") pod \"e394d94b-bf55-48b1-a200-b39ced56d1ad\" (UID: \"e394d94b-bf55-48b1-a200-b39ced56d1ad\") " Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672915 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzmr\" (UniqueName: \"kubernetes.io/projected/7e0b0735-ad2f-40d9-b314-6705481656d2-kube-api-access-pwzmr\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.672996 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0b0735-ad2f-40d9-b314-6705481656d2-serving-cert\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.673014 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-config\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.673070 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-client-ca\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" 
Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.673838 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "e394d94b-bf55-48b1-a200-b39ced56d1ad" (UID: "e394d94b-bf55-48b1-a200-b39ced56d1ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.673875 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config" (OuterVolumeSpecName: "config") pod "e394d94b-bf55-48b1-a200-b39ced56d1ad" (UID: "e394d94b-bf55-48b1-a200-b39ced56d1ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.684479 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e394d94b-bf55-48b1-a200-b39ced56d1ad" (UID: "e394d94b-bf55-48b1-a200-b39ced56d1ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.684512 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf" (OuterVolumeSpecName: "kube-api-access-hh8wf") pod "e394d94b-bf55-48b1-a200-b39ced56d1ad" (UID: "e394d94b-bf55-48b1-a200-b39ced56d1ad"). InnerVolumeSpecName "kube-api-access-hh8wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774391 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwzmr\" (UniqueName: \"kubernetes.io/projected/7e0b0735-ad2f-40d9-b314-6705481656d2-kube-api-access-pwzmr\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774471 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0b0735-ad2f-40d9-b314-6705481656d2-serving-cert\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774492 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-config\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774536 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-client-ca\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774596 4924 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774609 4924 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e394d94b-bf55-48b1-a200-b39ced56d1ad-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774620 4924 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e394d94b-bf55-48b1-a200-b39ced56d1ad-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.774635 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8wf\" (UniqueName: \"kubernetes.io/projected/e394d94b-bf55-48b1-a200-b39ced56d1ad-kube-api-access-hh8wf\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.775749 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-client-ca\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.775964 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0b0735-ad2f-40d9-b314-6705481656d2-config\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.780142 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e0b0735-ad2f-40d9-b314-6705481656d2-serving-cert\") pod 
\"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.792249 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwzmr\" (UniqueName: \"kubernetes.io/projected/7e0b0735-ad2f-40d9-b314-6705481656d2-kube-api-access-pwzmr\") pod \"route-controller-manager-765cf48c6f-5h8vz\" (UID: \"7e0b0735-ad2f-40d9-b314-6705481656d2\") " pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:51 crc kubenswrapper[4924]: I1211 14:00:51.865302 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.280116 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz"] Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.375611 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.375590 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b" event={"ID":"e394d94b-bf55-48b1-a200-b39ced56d1ad","Type":"ContainerDied","Data":"36a26605812140de9683c206c4af7953d57ca898a7487398ccade16500f93ce5"} Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.375779 4924 scope.go:117] "RemoveContainer" containerID="72d56194b3ed902d93abec7813aafd4a82c63f9bc56b6b85eab9dac2e122db7e" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.378942 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" event={"ID":"7e0b0735-ad2f-40d9-b314-6705481656d2","Type":"ContainerStarted","Data":"d43b6a779e2d04947b14cad71b620fe2db54c0bb711aa24edd14d1fe798b772f"} Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.412754 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.416085 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-98998769b-bwq7b"] Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.706808 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zmlx7"] Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.707516 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.730800 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zmlx7"] Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.790311 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e394d94b-bf55-48b1-a200-b39ced56d1ad" path="/var/lib/kubelet/pods/e394d94b-bf55-48b1-a200-b39ced56d1ad/volumes" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792258 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-tls\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792318 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc23085c-1725-42f2-9ec3-353a15cff63f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792366 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-trusted-ca\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792384 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6v79\" 
(UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-kube-api-access-l6v79\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792403 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-certificates\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792459 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792510 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-bound-sa-token\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.792557 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc23085c-1725-42f2-9ec3-353a15cff63f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.823500 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893623 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc23085c-1725-42f2-9ec3-353a15cff63f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893694 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-trusted-ca\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893719 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6v79\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-kube-api-access-l6v79\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893745 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-certificates\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893772 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-bound-sa-token\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893808 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc23085c-1725-42f2-9ec3-353a15cff63f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.893828 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-tls\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.894229 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dc23085c-1725-42f2-9ec3-353a15cff63f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.894958 4924 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-trusted-ca\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.895036 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-certificates\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.899375 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dc23085c-1725-42f2-9ec3-353a15cff63f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.900194 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-registry-tls\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.917776 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-bound-sa-token\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:52 crc kubenswrapper[4924]: I1211 14:00:52.921903 
4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6v79\" (UniqueName: \"kubernetes.io/projected/dc23085c-1725-42f2-9ec3-353a15cff63f-kube-api-access-l6v79\") pod \"image-registry-66df7c8f76-zmlx7\" (UID: \"dc23085c-1725-42f2-9ec3-353a15cff63f\") " pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 14:00:53.030235 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 14:00:53.387543 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" event={"ID":"7e0b0735-ad2f-40d9-b314-6705481656d2","Type":"ContainerStarted","Data":"987752c3a911e12d9cce0e09ad4b03e7d88034a095d53460e949f314ae637f88"} Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 14:00:53.387896 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 14:00:53.392685 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 14:00:53.408400 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-765cf48c6f-5h8vz" podStartSLOduration=3.40838497 podStartE2EDuration="3.40838497s" podCreationTimestamp="2025-12-11 14:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:00:53.404276115 +0000 UTC m=+466.913757092" watchObservedRunningTime="2025-12-11 14:00:53.40838497 +0000 UTC m=+466.917865947" Dec 11 14:00:53 crc kubenswrapper[4924]: I1211 
14:00:53.425452 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-zmlx7"] Dec 11 14:00:54 crc kubenswrapper[4924]: I1211 14:00:54.399065 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" event={"ID":"dc23085c-1725-42f2-9ec3-353a15cff63f","Type":"ContainerStarted","Data":"dfe419313e5218a26f10d3acc958a0858bcb9cc6a3570ee13c42df1478eb1fd3"} Dec 11 14:00:54 crc kubenswrapper[4924]: I1211 14:00:54.399435 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" event={"ID":"dc23085c-1725-42f2-9ec3-353a15cff63f","Type":"ContainerStarted","Data":"6f1dbfddcebd675939531efe4ed490138f95ea1394d0eb62e0682681594c59c8"} Dec 11 14:00:54 crc kubenswrapper[4924]: I1211 14:00:54.419273 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" podStartSLOduration=2.419249767 podStartE2EDuration="2.419249767s" podCreationTimestamp="2025-12-11 14:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:00:54.415305116 +0000 UTC m=+467.924786103" watchObservedRunningTime="2025-12-11 14:00:54.419249767 +0000 UTC m=+467.928730744" Dec 11 14:00:55 crc kubenswrapper[4924]: I1211 14:00:55.405582 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.151505 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmslv"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.152238 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hmslv" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" 
containerName="registry-server" containerID="cri-o://9313af13a47cdbbe46f5fc2277a24f0534ef7f99dd1d145e259498e9baa3b95a" gracePeriod=30 Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.155678 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.155894 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkv96" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="registry-server" containerID="cri-o://4cece45d59e8d48210e1c583dfe8073343ceae265067c5cbb1bf6f4666c7418c" gracePeriod=30 Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.165058 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.165305 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator" containerID="cri-o://25f788c079877304957f6db88ac395b470d3f7b4b9545b0e6ab9ae182a732faf" gracePeriod=30 Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.172255 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.172599 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fvkwf" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="registry-server" containerID="cri-o://6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4" gracePeriod=30 Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.194626 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jwzqd"] Dec 11 14:01:03 crc 
kubenswrapper[4924]: I1211 14:01:03.195477 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.198651 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.198821 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xvm8" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server" containerID="cri-o://227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" gracePeriod=30 Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.202562 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jwzqd"] Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.233087 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.233139 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.233169 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dwn2c\" (UniqueName: \"kubernetes.io/projected/a3fcde33-0260-4abe-a246-3606d271519c-kube-api-access-dwn2c\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.334434 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.334504 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.334548 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwn2c\" (UniqueName: \"kubernetes.io/projected/a3fcde33-0260-4abe-a246-3606d271519c-kube-api-access-dwn2c\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.336428 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd"
Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.342589 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a3fcde33-0260-4abe-a246-3606d271519c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd"
Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.361623 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwn2c\" (UniqueName: \"kubernetes.io/projected/a3fcde33-0260-4abe-a246-3606d271519c-kube-api-access-dwn2c\") pod \"marketplace-operator-79b997595-jwzqd\" (UID: \"a3fcde33-0260-4abe-a246-3606d271519c\") " pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd"
Dec 11 14:01:03 crc kubenswrapper[4924]: E1211 14:01:03.452033 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4 is running failed: container process not found" containerID="6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:03 crc kubenswrapper[4924]: E1211 14:01:03.452423 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4 is running failed: container process not found" containerID="6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:03 crc kubenswrapper[4924]: E1211 14:01:03.452675 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4 is running failed: container process not found" containerID="6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:03 crc kubenswrapper[4924]: E1211 14:01:03.452706 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-fvkwf" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="registry-server"
Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.522111 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd"
Dec 11 14:01:03 crc kubenswrapper[4924]: I1211 14:01:03.940460 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jwzqd"]
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.474235 4924 generic.go:334] "Generic (PLEG): container finished" podID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerID="4cece45d59e8d48210e1c583dfe8073343ceae265067c5cbb1bf6f4666c7418c" exitCode=0
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.474337 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerDied","Data":"4cece45d59e8d48210e1c583dfe8073343ceae265067c5cbb1bf6f4666c7418c"}
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.478235 4924 generic.go:334] "Generic (PLEG): container finished" podID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerID="9313af13a47cdbbe46f5fc2277a24f0534ef7f99dd1d145e259498e9baa3b95a" exitCode=0
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.478319 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerDied","Data":"9313af13a47cdbbe46f5fc2277a24f0534ef7f99dd1d145e259498e9baa3b95a"}
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.480621 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dvnc9_6576a4b8-18f3-4084-ae2e-7564ac2f59a1/marketplace-operator/1.log"
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.480665 4924 generic.go:334] "Generic (PLEG): container finished" podID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerID="25f788c079877304957f6db88ac395b470d3f7b4b9545b0e6ab9ae182a732faf" exitCode=0
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.480727 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerDied","Data":"25f788c079877304957f6db88ac395b470d3f7b4b9545b0e6ab9ae182a732faf"}
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.480780 4924 scope.go:117] "RemoveContainer" containerID="8e7b4622db8cb74b0b2d8f6ff19e27d7069ee727f28079379a9548d363c53e48"
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.482153 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" event={"ID":"a3fcde33-0260-4abe-a246-3606d271519c","Type":"ContainerStarted","Data":"10652e03d88e797e0024b272d4dbd34a05bdaa618895767d944c71164284fcb7"}
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.490376 4924 generic.go:334] "Generic (PLEG): container finished" podID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerID="6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4" exitCode=0
Dec 11 14:01:04 crc kubenswrapper[4924]: I1211 14:01:04.490411 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvkwf" event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerDied","Data":"6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4"}
Dec 11 14:01:04 crc kubenswrapper[4924]: E1211 14:01:04.833501 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2 is running failed: container process not found" containerID="227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:04 crc kubenswrapper[4924]: E1211 14:01:04.834529 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2 is running failed: container process not found" containerID="227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:04 crc kubenswrapper[4924]: E1211 14:01:04.835068 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2 is running failed: container process not found" containerID="227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" cmd=["grpc_health_probe","-addr=:50051"]
Dec 11 14:01:04 crc kubenswrapper[4924]: E1211 14:01:04.835180 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-6xvm8" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server"
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.497489 4924 generic.go:334] "Generic (PLEG): container finished" podID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerID="227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" exitCode=0
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.497772 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerDied","Data":"227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2"}
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.500411 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fvkwf" event={"ID":"f678bb40-07bb-4ae9-a317-4d06821f518a","Type":"ContainerDied","Data":"4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc"}
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.500456 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dda7734f2f90c1d39823488289e87a7b3e2deccd5c8d8a6e8e92911b19deabc"
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.514821 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvkwf"
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.565905 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txg9z\" (UniqueName: \"kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z\") pod \"f678bb40-07bb-4ae9-a317-4d06821f518a\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") "
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.565955 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities\") pod \"f678bb40-07bb-4ae9-a317-4d06821f518a\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") "
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.566026 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content\") pod \"f678bb40-07bb-4ae9-a317-4d06821f518a\" (UID: \"f678bb40-07bb-4ae9-a317-4d06821f518a\") "
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.568473 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities" (OuterVolumeSpecName: "utilities") pod "f678bb40-07bb-4ae9-a317-4d06821f518a" (UID: "f678bb40-07bb-4ae9-a317-4d06821f518a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.571985 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z" (OuterVolumeSpecName: "kube-api-access-txg9z") pod "f678bb40-07bb-4ae9-a317-4d06821f518a" (UID: "f678bb40-07bb-4ae9-a317-4d06821f518a"). InnerVolumeSpecName "kube-api-access-txg9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.590667 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f678bb40-07bb-4ae9-a317-4d06821f518a" (UID: "f678bb40-07bb-4ae9-a317-4d06821f518a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.667203 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.667243 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txg9z\" (UniqueName: \"kubernetes.io/projected/f678bb40-07bb-4ae9-a317-4d06821f518a-kube-api-access-txg9z\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:05 crc kubenswrapper[4924]: I1211 14:01:05.667255 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f678bb40-07bb-4ae9-a317-4d06821f518a-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.508090 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" event={"ID":"a3fcde33-0260-4abe-a246-3606d271519c","Type":"ContainerStarted","Data":"eda3378738214f73945b8b23b9d075bd255bb4524a693a682708d52cc51fb691"}
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.508130 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fvkwf"
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.542323 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"]
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.546075 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fvkwf"]
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.797659 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" path="/var/lib/kubelet/pods/f678bb40-07bb-4ae9-a317-4d06821f518a/volumes"
Dec 11 14:01:06 crc kubenswrapper[4924]: I1211 14:01:06.992233 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.081012 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q7wn\" (UniqueName: \"kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn\") pod \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") "
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.081047 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca\") pod \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") "
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.081077 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics\") pod \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\" (UID: \"6576a4b8-18f3-4084-ae2e-7564ac2f59a1\") "
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.081780 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "6576a4b8-18f3-4084-ae2e-7564ac2f59a1" (UID: "6576a4b8-18f3-4084-ae2e-7564ac2f59a1"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.082066 4924 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.086118 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "6576a4b8-18f3-4084-ae2e-7564ac2f59a1" (UID: "6576a4b8-18f3-4084-ae2e-7564ac2f59a1"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.087183 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn" (OuterVolumeSpecName: "kube-api-access-8q7wn") pod "6576a4b8-18f3-4084-ae2e-7564ac2f59a1" (UID: "6576a4b8-18f3-4084-ae2e-7564ac2f59a1"). InnerVolumeSpecName "kube-api-access-8q7wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.184454 4924 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.184533 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q7wn\" (UniqueName: \"kubernetes.io/projected/6576a4b8-18f3-4084-ae2e-7564ac2f59a1-kube-api-access-8q7wn\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.515442 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" event={"ID":"6576a4b8-18f3-4084-ae2e-7564ac2f59a1","Type":"ContainerDied","Data":"ef964f25e06922d014e60cf4927b8cfeb8a7283f341c368d438a985acd5b3d2a"}
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.515495 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.515811 4924 scope.go:117] "RemoveContainer" containerID="25f788c079877304957f6db88ac395b470d3f7b4b9545b0e6ab9ae182a732faf"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.550493 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"]
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.558145 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dvnc9"]
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.563953 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"]
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.564468 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.564553 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.564614 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="extract-content"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.564681 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="extract-content"
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.564743 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.564797 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.564865 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="extract-utilities"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.564919 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="extract-utilities"
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.564975 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="registry-server"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565045 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="registry-server"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565242 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565320 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="f678bb40-07bb-4ae9-a317-4d06821f518a" containerName="registry-server"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565459 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: E1211 14:01:07.565626 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565690 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.565862 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.566612 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.568347 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"]
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.569267 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.593370 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.593424 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.593454 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2g4g\" (UniqueName: \"kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.694464 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.694548 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2g4g\" (UniqueName: \"kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.694673 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.695018 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.695116 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.723207 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2g4g\" (UniqueName: \"kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g\") pod \"redhat-marketplace-wkhr4\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.839527 4924 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dvnc9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.839858 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dvnc9" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 11 14:01:07 crc kubenswrapper[4924]: I1211 14:01:07.881737 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.289118 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmslv"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.347997 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"]
Dec 11 14:01:08 crc kubenswrapper[4924]: W1211 14:01:08.367018 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6a6ca7a_2612_4a09_9d44_a364568ef20e.slice/crio-fcfbcf57a238efcecf680aa049f2094e775f80d4c456294700cadbf237d32c8e WatchSource:0}: Error finding container fcfbcf57a238efcecf680aa049f2094e775f80d4c456294700cadbf237d32c8e: Status 404 returned error can't find the container with id fcfbcf57a238efcecf680aa049f2094e775f80d4c456294700cadbf237d32c8e
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.404564 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content\") pod \"4b178dc2-db02-45b7-a589-b1e71d29c50e\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.404635 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjg2p\" (UniqueName: \"kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p\") pod \"4b178dc2-db02-45b7-a589-b1e71d29c50e\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.404712 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities\") pod \"4b178dc2-db02-45b7-a589-b1e71d29c50e\" (UID: \"4b178dc2-db02-45b7-a589-b1e71d29c50e\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.405768 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities" (OuterVolumeSpecName: "utilities") pod "4b178dc2-db02-45b7-a589-b1e71d29c50e" (UID: "4b178dc2-db02-45b7-a589-b1e71d29c50e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.409882 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p" (OuterVolumeSpecName: "kube-api-access-rjg2p") pod "4b178dc2-db02-45b7-a589-b1e71d29c50e" (UID: "4b178dc2-db02-45b7-a589-b1e71d29c50e"). InnerVolumeSpecName "kube-api-access-rjg2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.464626 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b178dc2-db02-45b7-a589-b1e71d29c50e" (UID: "4b178dc2-db02-45b7-a589-b1e71d29c50e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.507306 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.507382 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjg2p\" (UniqueName: \"kubernetes.io/projected/4b178dc2-db02-45b7-a589-b1e71d29c50e-kube-api-access-rjg2p\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.507403 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b178dc2-db02-45b7-a589-b1e71d29c50e-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.525104 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerStarted","Data":"fcfbcf57a238efcecf680aa049f2094e775f80d4c456294700cadbf237d32c8e"}
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.556733 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmslv" event={"ID":"4b178dc2-db02-45b7-a589-b1e71d29c50e","Type":"ContainerDied","Data":"9a7fbce4568c8c3146955dfd5f13b6d55c849d1638203297ad75be54726df882"}
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.556874 4924 scope.go:117] "RemoveContainer" containerID="9313af13a47cdbbe46f5fc2277a24f0534ef7f99dd1d145e259498e9baa3b95a"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.557074 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmslv"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.578576 4924 scope.go:117] "RemoveContainer" containerID="13f8084adc66c1f1d107eee356a86e5cf5717e6fe13f86d074d1133fd0ef4229"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.598768 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmslv"]
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.603302 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hmslv"]
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.608472 4924 scope.go:117] "RemoveContainer" containerID="4f08dbfae0e6c98ece06338dcbf839d5e0082fbd03b5563a125682811f3b6037"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.665339 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkv96"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.710358 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content\") pod \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.710434 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2z6m\" (UniqueName: \"kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m\") pod \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.710491 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities\") pod \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\" (UID: \"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.711525 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities" (OuterVolumeSpecName: "utilities") pod "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" (UID: "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.739495 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m" (OuterVolumeSpecName: "kube-api-access-z2z6m") pod "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" (UID: "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1"). InnerVolumeSpecName "kube-api-access-z2z6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.768034 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" (UID: "2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.768804 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xvm8"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.809533 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" path="/var/lib/kubelet/pods/4b178dc2-db02-45b7-a589-b1e71d29c50e/volumes"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.810512 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6576a4b8-18f3-4084-ae2e-7564ac2f59a1" path="/var/lib/kubelet/pods/6576a4b8-18f3-4084-ae2e-7564ac2f59a1/volumes"
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812178 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content\") pod \"bceef104-5373-46a2-b7d9-5cc5782449f6\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812261 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb9pv\" (UniqueName: \"kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv\") pod \"bceef104-5373-46a2-b7d9-5cc5782449f6\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812310 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities\") pod \"bceef104-5373-46a2-b7d9-5cc5782449f6\" (UID: \"bceef104-5373-46a2-b7d9-5cc5782449f6\") "
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812656 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2z6m\" (UniqueName: \"kubernetes.io/projected/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-kube-api-access-z2z6m\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812679 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.812688 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.813861 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities" (OuterVolumeSpecName: "utilities") pod "bceef104-5373-46a2-b7d9-5cc5782449f6" (UID: "bceef104-5373-46a2-b7d9-5cc5782449f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.819518 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv" (OuterVolumeSpecName: "kube-api-access-tb9pv") pod "bceef104-5373-46a2-b7d9-5cc5782449f6" (UID: "bceef104-5373-46a2-b7d9-5cc5782449f6"). InnerVolumeSpecName "kube-api-access-tb9pv".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.913537 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb9pv\" (UniqueName: \"kubernetes.io/projected/bceef104-5373-46a2-b7d9-5cc5782449f6-kube-api-access-tb9pv\") on node \"crc\" DevicePath \"\"" Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.913569 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:01:08 crc kubenswrapper[4924]: I1211 14:01:08.930475 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bceef104-5373-46a2-b7d9-5cc5782449f6" (UID: "bceef104-5373-46a2-b7d9-5cc5782449f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.015121 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bceef104-5373-46a2-b7d9-5cc5782449f6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.563753 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xvm8" event={"ID":"bceef104-5373-46a2-b7d9-5cc5782449f6","Type":"ContainerDied","Data":"1f4921feeaf643c213428a38af3ca61358932420937b4273aba5c0b27b774013"} Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.563800 4924 scope.go:117] "RemoveContainer" containerID="227d62c5bdfd3aedcf6fc88e50d08f087d54712bb907ed0bbe2bdc443cc9a9f2" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.563915 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xvm8" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.570262 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkv96" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.570257 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkv96" event={"ID":"2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1","Type":"ContainerDied","Data":"bd1864157f70e7bc1ec7894083cc944d5a1bcbf982d5edaab56cdbd4dfa05527"} Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.570586 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.578242 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.595457 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.600951 4924 scope.go:117] "RemoveContainer" containerID="d46eb7bc44782fb847fbae790ef167baaeb2666c326c0a4ba55f8fa1fe808ee1" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.601571 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkv96"] Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.610272 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jwzqd" podStartSLOduration=6.610252082 podStartE2EDuration="6.610252082s" podCreationTimestamp="2025-12-11 14:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:01:09.607926516 
+0000 UTC m=+483.117407493" watchObservedRunningTime="2025-12-11 14:01:09.610252082 +0000 UTC m=+483.119733069" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.629782 4924 scope.go:117] "RemoveContainer" containerID="8924c5a0a758b199bc8e88aa1678bf44ba4a91651732b884484b2ecbaceb920d" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.635718 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.639020 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xvm8"] Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.663373 4924 scope.go:117] "RemoveContainer" containerID="4cece45d59e8d48210e1c583dfe8073343ceae265067c5cbb1bf6f4666c7418c" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.678974 4924 scope.go:117] "RemoveContainer" containerID="294e22e7c144be9be4700c46ad769306e2f139a6ef0723c4e76ba59b95ea968d" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.701958 4924 scope.go:117] "RemoveContainer" containerID="603d1a1d803a2351a9974054e97af0efe401ec0340e5e84f35b44b52ed526b91" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.952512 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w6xsm"] Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953167 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.953270 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953351 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 
14:01:09.953413 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953539 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.953618 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953681 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.953742 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953803 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.953862 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.953924 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.953988 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.954045 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 
14:01:09.954098 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.954156 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.954215 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="extract-content" Dec 11 14:01:09 crc kubenswrapper[4924]: E1211 14:01:09.954271 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.954350 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="extract-utilities" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.954538 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.954634 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b178dc2-db02-45b7-a589-b1e71d29c50e" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.954709 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" containerName="registry-server" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.955601 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.964513 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsm"] Dec 11 14:01:09 crc kubenswrapper[4924]: I1211 14:01:09.965895 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.035771 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvc5s\" (UniqueName: \"kubernetes.io/projected/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-kube-api-access-mvc5s\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.036279 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-catalog-content\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.036461 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-utilities\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.138100 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-utilities\") pod \"certified-operators-w6xsm\" (UID: 
\"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.138233 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvc5s\" (UniqueName: \"kubernetes.io/projected/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-kube-api-access-mvc5s\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.138281 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-catalog-content\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.139050 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-catalog-content\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.139579 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-utilities\") pod \"certified-operators-w6xsm\" (UID: \"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.160804 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvc5s\" (UniqueName: \"kubernetes.io/projected/1531a8b4-8fbe-46d1-b7fe-e4ef93f57224-kube-api-access-mvc5s\") pod \"certified-operators-w6xsm\" (UID: 
\"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224\") " pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.276432 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w6xsm" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.501987 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w6xsm"] Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.548416 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qg72j"] Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.549862 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.553233 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.559187 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qg72j"] Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.576885 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsm" event={"ID":"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224","Type":"ContainerStarted","Data":"1e4a9e70aebf1445d5046dd99286d202f191daba01cb28f6d28c93fc53bfc2c4"} Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.577973 4924 generic.go:334] "Generic (PLEG): container finished" podID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerID="31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696" exitCode=0 Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.578008 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" 
event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerDied","Data":"31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696"} Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.584701 4924 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.644301 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq9v9\" (UniqueName: \"kubernetes.io/projected/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-kube-api-access-kq9v9\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.644451 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-catalog-content\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.644485 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-utilities\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.758586 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq9v9\" (UniqueName: \"kubernetes.io/projected/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-kube-api-access-kq9v9\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc 
kubenswrapper[4924]: I1211 14:01:10.759309 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-catalog-content\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.759374 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-utilities\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.760852 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-catalog-content\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.761067 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-utilities\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.778102 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq9v9\" (UniqueName: \"kubernetes.io/projected/532a71ee-9b39-4e46-9900-cdb1a1bea3ec-kube-api-access-kq9v9\") pod \"community-operators-qg72j\" (UID: \"532a71ee-9b39-4e46-9900-cdb1a1bea3ec\") " pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.789377 4924 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1" path="/var/lib/kubelet/pods/2e98271d-3b5f-4c0d-963f-3d4ec1e0aad1/volumes" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.790028 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bceef104-5373-46a2-b7d9-5cc5782449f6" path="/var/lib/kubelet/pods/bceef104-5373-46a2-b7d9-5cc5782449f6/volumes" Dec 11 14:01:10 crc kubenswrapper[4924]: I1211 14:01:10.973663 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qg72j" Dec 11 14:01:11 crc kubenswrapper[4924]: I1211 14:01:11.374828 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qg72j"] Dec 11 14:01:11 crc kubenswrapper[4924]: W1211 14:01:11.376038 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532a71ee_9b39_4e46_9900_cdb1a1bea3ec.slice/crio-e5e4d79b93f7fa8e59982f66102375f57409ffa346fdeb88eeb25210a9d5b3b9 WatchSource:0}: Error finding container e5e4d79b93f7fa8e59982f66102375f57409ffa346fdeb88eeb25210a9d5b3b9: Status 404 returned error can't find the container with id e5e4d79b93f7fa8e59982f66102375f57409ffa346fdeb88eeb25210a9d5b3b9 Dec 11 14:01:11 crc kubenswrapper[4924]: I1211 14:01:11.588195 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qg72j" event={"ID":"532a71ee-9b39-4e46-9900-cdb1a1bea3ec","Type":"ContainerStarted","Data":"e5e4d79b93f7fa8e59982f66102375f57409ffa346fdeb88eeb25210a9d5b3b9"} Dec 11 14:01:11 crc kubenswrapper[4924]: I1211 14:01:11.589823 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsm" event={"ID":"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224","Type":"ContainerStarted","Data":"a75534d2efb4aa2fd0c086c422a3a9f0d35f0562fcadfbc06b8adcd2d6e2e095"} Dec 11 
14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.611787 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qg72j" event={"ID":"532a71ee-9b39-4e46-9900-cdb1a1bea3ec","Type":"ContainerStarted","Data":"2933428a0f6ec105ee6a49b8b926a08a24dcd1e63a9baf75249001a7197cfc7e"} Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.616406 4924 generic.go:334] "Generic (PLEG): container finished" podID="1531a8b4-8fbe-46d1-b7fe-e4ef93f57224" containerID="a75534d2efb4aa2fd0c086c422a3a9f0d35f0562fcadfbc06b8adcd2d6e2e095" exitCode=0 Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.616487 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsm" event={"ID":"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224","Type":"ContainerDied","Data":"a75534d2efb4aa2fd0c086c422a3a9f0d35f0562fcadfbc06b8adcd2d6e2e095"} Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.948865 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4v7vf"] Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.950251 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.952117 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 14:01:12 crc kubenswrapper[4924]: I1211 14:01:12.960089 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4v7vf"] Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.040535 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-zmlx7" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.091959 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-utilities\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.092034 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-catalog-content\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.092194 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fzm\" (UniqueName: \"kubernetes.io/projected/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-kube-api-access-k5fzm\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.095827 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"] Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.193760 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-utilities\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.193814 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-catalog-content\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.193853 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fzm\" (UniqueName: \"kubernetes.io/projected/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-kube-api-access-k5fzm\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.194741 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-utilities\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.194822 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-catalog-content\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf" Dec 11 
14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.217509 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fzm\" (UniqueName: \"kubernetes.io/projected/9e52be9c-2b4a-4fff-bb8e-24d5725d6c41-kube-api-access-k5fzm\") pod \"redhat-operators-4v7vf\" (UID: \"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41\") " pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.280586 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.624252 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerStarted","Data":"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45"}
Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.626225 4924 generic.go:334] "Generic (PLEG): container finished" podID="532a71ee-9b39-4e46-9900-cdb1a1bea3ec" containerID="2933428a0f6ec105ee6a49b8b926a08a24dcd1e63a9baf75249001a7197cfc7e" exitCode=0
Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.626344 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qg72j" event={"ID":"532a71ee-9b39-4e46-9900-cdb1a1bea3ec","Type":"ContainerDied","Data":"2933428a0f6ec105ee6a49b8b926a08a24dcd1e63a9baf75249001a7197cfc7e"}
Dec 11 14:01:13 crc kubenswrapper[4924]: I1211 14:01:13.737483 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4v7vf"]
Dec 11 14:01:14 crc kubenswrapper[4924]: I1211 14:01:14.633849 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v7vf" event={"ID":"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41","Type":"ContainerDied","Data":"01987aa239c148549a9cd013fc5a3c9c3a2cd51e2db9c1decdc1b86630d4d070"}
Dec 11 14:01:14 crc kubenswrapper[4924]: I1211 14:01:14.633435 4924 generic.go:334] "Generic (PLEG): container finished" podID="9e52be9c-2b4a-4fff-bb8e-24d5725d6c41" containerID="01987aa239c148549a9cd013fc5a3c9c3a2cd51e2db9c1decdc1b86630d4d070" exitCode=0
Dec 11 14:01:14 crc kubenswrapper[4924]: I1211 14:01:14.634774 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v7vf" event={"ID":"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41","Type":"ContainerStarted","Data":"0c2b98523a482b4d87fe81af1694f3e7bd5befa108d601fd06dd15ab15e1cbce"}
Dec 11 14:01:14 crc kubenswrapper[4924]: I1211 14:01:14.638378 4924 generic.go:334] "Generic (PLEG): container finished" podID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerID="f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45" exitCode=0
Dec 11 14:01:14 crc kubenswrapper[4924]: I1211 14:01:14.638416 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerDied","Data":"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45"}
Dec 11 14:01:15 crc kubenswrapper[4924]: I1211 14:01:15.656428 4924 generic.go:334] "Generic (PLEG): container finished" podID="1531a8b4-8fbe-46d1-b7fe-e4ef93f57224" containerID="70fd292875693e7c4900f1fc6d5d8398a90a951f118b312d894b8d13022926d1" exitCode=0
Dec 11 14:01:15 crc kubenswrapper[4924]: I1211 14:01:15.656537 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsm" event={"ID":"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224","Type":"ContainerDied","Data":"70fd292875693e7c4900f1fc6d5d8398a90a951f118b312d894b8d13022926d1"}
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.681736 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v7vf" event={"ID":"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41","Type":"ContainerStarted","Data":"ad0a8e333fd2a958b540358e69a259812acd77edc503eecda0a981d012deb3e8"}
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.686298 4924 generic.go:334] "Generic (PLEG): container finished" podID="532a71ee-9b39-4e46-9900-cdb1a1bea3ec" containerID="128858ae73d5aa9106320adf5abbbd20c4244437ab6a5488b983b6c0b664a083" exitCode=0
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.686376 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qg72j" event={"ID":"532a71ee-9b39-4e46-9900-cdb1a1bea3ec","Type":"ContainerDied","Data":"128858ae73d5aa9106320adf5abbbd20c4244437ab6a5488b983b6c0b664a083"}
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.690299 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w6xsm" event={"ID":"1531a8b4-8fbe-46d1-b7fe-e4ef93f57224","Type":"ContainerStarted","Data":"297c47a826d8e666518f5ba1a1df5748a0672f4e3af6e61354ee7b41dddf8fd6"}
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.692476 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerStarted","Data":"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d"}
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.722429 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wkhr4" podStartSLOduration=5.530456948 podStartE2EDuration="11.722409465s" podCreationTimestamp="2025-12-11 14:01:07 +0000 UTC" firstStartedPulling="2025-12-11 14:01:10.584490817 +0000 UTC m=+484.093971784" lastFinishedPulling="2025-12-11 14:01:16.776443324 +0000 UTC m=+490.285924301" observedRunningTime="2025-12-11 14:01:18.720402958 +0000 UTC m=+492.229883945" watchObservedRunningTime="2025-12-11 14:01:18.722409465 +0000 UTC m=+492.231890442"
Dec 11 14:01:18 crc kubenswrapper[4924]: I1211 14:01:18.756528 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w6xsm" podStartSLOduration=6.461360593 podStartE2EDuration="9.756505745s" podCreationTimestamp="2025-12-11 14:01:09 +0000 UTC" firstStartedPulling="2025-12-11 14:01:13.627631043 +0000 UTC m=+487.137112020" lastFinishedPulling="2025-12-11 14:01:16.922776195 +0000 UTC m=+490.432257172" observedRunningTime="2025-12-11 14:01:18.753648374 +0000 UTC m=+492.263129351" watchObservedRunningTime="2025-12-11 14:01:18.756505745 +0000 UTC m=+492.265986732"
Dec 11 14:01:19 crc kubenswrapper[4924]: I1211 14:01:19.699272 4924 generic.go:334] "Generic (PLEG): container finished" podID="9e52be9c-2b4a-4fff-bb8e-24d5725d6c41" containerID="ad0a8e333fd2a958b540358e69a259812acd77edc503eecda0a981d012deb3e8" exitCode=0
Dec 11 14:01:19 crc kubenswrapper[4924]: I1211 14:01:19.699379 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v7vf" event={"ID":"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41","Type":"ContainerDied","Data":"ad0a8e333fd2a958b540358e69a259812acd77edc503eecda0a981d012deb3e8"}
Dec 11 14:01:20 crc kubenswrapper[4924]: I1211 14:01:20.277761 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w6xsm"
Dec 11 14:01:20 crc kubenswrapper[4924]: I1211 14:01:20.277797 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w6xsm"
Dec 11 14:01:20 crc kubenswrapper[4924]: I1211 14:01:20.315954 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w6xsm"
Dec 11 14:01:22 crc kubenswrapper[4924]: I1211 14:01:22.717095 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4v7vf" event={"ID":"9e52be9c-2b4a-4fff-bb8e-24d5725d6c41","Type":"ContainerStarted","Data":"ebb083b4e2506a8e1ddc08c7b5d3a374d0999b4ac18765acb91c5720e555b5b4"}
Dec 11 14:01:22 crc kubenswrapper[4924]: I1211 14:01:22.720362 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qg72j" event={"ID":"532a71ee-9b39-4e46-9900-cdb1a1bea3ec","Type":"ContainerStarted","Data":"ba4c3f8310b280adc55ff6c037d9e6923827d307cae4e5aeed93401f43417bad"}
Dec 11 14:01:22 crc kubenswrapper[4924]: I1211 14:01:22.732669 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4v7vf" podStartSLOduration=3.084939202 podStartE2EDuration="10.732650145s" podCreationTimestamp="2025-12-11 14:01:12 +0000 UTC" firstStartedPulling="2025-12-11 14:01:14.651465095 +0000 UTC m=+488.160946072" lastFinishedPulling="2025-12-11 14:01:22.299176038 +0000 UTC m=+495.808657015" observedRunningTime="2025-12-11 14:01:22.731854442 +0000 UTC m=+496.241335419" watchObservedRunningTime="2025-12-11 14:01:22.732650145 +0000 UTC m=+496.242131122"
Dec 11 14:01:22 crc kubenswrapper[4924]: I1211 14:01:22.749957 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qg72j" podStartSLOduration=5.96092774 podStartE2EDuration="12.749939881s" podCreationTimestamp="2025-12-11 14:01:10 +0000 UTC" firstStartedPulling="2025-12-11 14:01:13.629123425 +0000 UTC m=+487.138604402" lastFinishedPulling="2025-12-11 14:01:20.418135566 +0000 UTC m=+493.927616543" observedRunningTime="2025-12-11 14:01:22.746977778 +0000 UTC m=+496.256458755" watchObservedRunningTime="2025-12-11 14:01:22.749939881 +0000 UTC m=+496.259420858"
Dec 11 14:01:23 crc kubenswrapper[4924]: I1211 14:01:23.281768 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:23 crc kubenswrapper[4924]: I1211 14:01:23.281831 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:24 crc kubenswrapper[4924]: I1211 14:01:24.321344 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4v7vf" podUID="9e52be9c-2b4a-4fff-bb8e-24d5725d6c41" containerName="registry-server" probeResult="failure" output=<
Dec 11 14:01:24 crc kubenswrapper[4924]: timeout: failed to connect service ":50051" within 1s
Dec 11 14:01:24 crc kubenswrapper[4924]: >
Dec 11 14:01:27 crc kubenswrapper[4924]: I1211 14:01:27.882015 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:27 crc kubenswrapper[4924]: I1211 14:01:27.882369 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:27 crc kubenswrapper[4924]: I1211 14:01:27.921943 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:28 crc kubenswrapper[4924]: I1211 14:01:28.791272 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wkhr4"
Dec 11 14:01:30 crc kubenswrapper[4924]: I1211 14:01:30.320185 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w6xsm"
Dec 11 14:01:30 crc kubenswrapper[4924]: I1211 14:01:30.974191 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qg72j"
Dec 11 14:01:30 crc kubenswrapper[4924]: I1211 14:01:30.974288 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qg72j"
Dec 11 14:01:31 crc kubenswrapper[4924]: I1211 14:01:31.038194 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qg72j"
Dec 11 14:01:31 crc kubenswrapper[4924]: I1211 14:01:31.816377 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qg72j"
Dec 11 14:01:33 crc kubenswrapper[4924]: I1211 14:01:33.327444 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:33 crc kubenswrapper[4924]: I1211 14:01:33.385920 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4v7vf"
Dec 11 14:01:38 crc kubenswrapper[4924]: I1211 14:01:38.133990 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" podUID="167e3306-54e1-470a-a7d6-55b2742ca45e" containerName="registry" containerID="cri-o://65e9869c9548122c141a4fea1c08c25652d121893d812a078c6fe687f6737425" gracePeriod=30
Dec 11 14:01:39 crc kubenswrapper[4924]: I1211 14:01:39.809313 4924 generic.go:334] "Generic (PLEG): container finished" podID="167e3306-54e1-470a-a7d6-55b2742ca45e" containerID="65e9869c9548122c141a4fea1c08c25652d121893d812a078c6fe687f6737425" exitCode=0
Dec 11 14:01:39 crc kubenswrapper[4924]: I1211 14:01:39.809574 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" event={"ID":"167e3306-54e1-470a-a7d6-55b2742ca45e","Type":"ContainerDied","Data":"65e9869c9548122c141a4fea1c08c25652d121893d812a078c6fe687f6737425"}
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.459363 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510615 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510684 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510785 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510806 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8n9z\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510841 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510863 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.510884 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.511005 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"167e3306-54e1-470a-a7d6-55b2742ca45e\" (UID: \"167e3306-54e1-470a-a7d6-55b2742ca45e\") "
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.512493 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.512587 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.517231 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z" (OuterVolumeSpecName: "kube-api-access-q8n9z") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "kube-api-access-q8n9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.517484 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.518099 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.518308 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.519182 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.528108 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "167e3306-54e1-470a-a7d6-55b2742ca45e" (UID: "167e3306-54e1-470a-a7d6-55b2742ca45e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616034 4924 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/167e3306-54e1-470a-a7d6-55b2742ca45e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616073 4924 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-certificates\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616083 4924 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-registry-tls\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616093 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8n9z\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-kube-api-access-q8n9z\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616106 4924 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/167e3306-54e1-470a-a7d6-55b2742ca45e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616115 4924 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/167e3306-54e1-470a-a7d6-55b2742ca45e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.616124 4924 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/167e3306-54e1-470a-a7d6-55b2742ca45e-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.816813 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk" event={"ID":"167e3306-54e1-470a-a7d6-55b2742ca45e","Type":"ContainerDied","Data":"30c21cbe9726d1ac8840b7a60e4ddeefd65a38e67fcb564bc90e4725abd6cfe5"}
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.817148 4924 scope.go:117] "RemoveContainer" containerID="65e9869c9548122c141a4fea1c08c25652d121893d812a078c6fe687f6737425"
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.816909 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vwnfk"
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.845198 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"]
Dec 11 14:01:40 crc kubenswrapper[4924]: I1211 14:01:40.848783 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vwnfk"]
Dec 11 14:01:42 crc kubenswrapper[4924]: I1211 14:01:42.795135 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167e3306-54e1-470a-a7d6-55b2742ca45e" path="/var/lib/kubelet/pods/167e3306-54e1-470a-a7d6-55b2742ca45e/volumes"
Dec 11 14:02:15 crc kubenswrapper[4924]: I1211 14:02:15.433844 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:02:15 crc kubenswrapper[4924]: I1211 14:02:15.434342 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:02:31 crc kubenswrapper[4924]: I1211 14:02:31.852311 4924 scope.go:117] "RemoveContainer" containerID="a72ad10e549deac73f58d940a2212eadcb890975767eace8619ae17616f81e82"
Dec 11 14:02:45 crc kubenswrapper[4924]: I1211 14:02:45.433972 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:02:45 crc kubenswrapper[4924]: I1211 14:02:45.434678 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:03:15 crc kubenswrapper[4924]: I1211 14:03:15.433790 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:03:15 crc kubenswrapper[4924]: I1211 14:03:15.434547 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:03:15 crc kubenswrapper[4924]: I1211 14:03:15.434610 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf"
Dec 11 14:03:15 crc kubenswrapper[4924]: I1211 14:03:15.435479 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 14:03:15 crc kubenswrapper[4924]: I1211 14:03:15.435579 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f" gracePeriod=600
Dec 11 14:03:16 crc kubenswrapper[4924]: I1211 14:03:16.307454 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f" exitCode=0
Dec 11 14:03:16 crc kubenswrapper[4924]: I1211 14:03:16.307518 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f"}
Dec 11 14:03:16 crc kubenswrapper[4924]: I1211 14:03:16.307949 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd"}
Dec 11 14:03:16 crc kubenswrapper[4924]: I1211 14:03:16.307991 4924 scope.go:117] "RemoveContainer" containerID="c79fce7fa0c1a857b32a9d68eaa5e8584a74fcf871adf90d33f6d45436b5aac8"
Dec 11 14:03:31 crc kubenswrapper[4924]: I1211 14:03:31.886982 4924 scope.go:117] "RemoveContainer" containerID="6e1654c7b7a9e8aa4b129bbcf2f15611cee79f52da0dd25bfe745c7c721c1fd4"
Dec 11 14:03:31 crc kubenswrapper[4924]: I1211 14:03:31.905171 4924 scope.go:117] "RemoveContainer" containerID="b5316e8315be2ccd5c6aef7c9c479c1523bc559dc6a19cacd1511bd3e1e8931e"
Dec 11 14:05:15 crc kubenswrapper[4924]: I1211 14:05:15.432837 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:05:15 crc kubenswrapper[4924]: I1211 14:05:15.433488 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:05:38 crc kubenswrapper[4924]: I1211 14:05:38.708114 4924 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 11 14:05:45 crc kubenswrapper[4924]: I1211 14:05:45.433826 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:05:45 crc kubenswrapper[4924]: I1211 14:05:45.434402 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:06:15 crc kubenswrapper[4924]: I1211 14:06:15.433004 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:06:15 crc kubenswrapper[4924]: I1211 14:06:15.433549 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:06:15 crc kubenswrapper[4924]: I1211 14:06:15.433602 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf"
Dec 11 14:06:15 crc kubenswrapper[4924]: I1211 14:06:15.434251 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 14:06:15 crc kubenswrapper[4924]: I1211 14:06:15.434341 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd" gracePeriod=600
Dec 11 14:06:16 crc kubenswrapper[4924]: I1211 14:06:16.287181 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd" exitCode=0
Dec 11 14:06:16 crc kubenswrapper[4924]: I1211 14:06:16.287433 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd"}
Dec 11 14:06:16 crc kubenswrapper[4924]: I1211 14:06:16.287801 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4"}
Dec 11 14:06:16 crc kubenswrapper[4924]: I1211 14:06:16.287827 4924 scope.go:117] "RemoveContainer" containerID="405606bd5064b5b5b954e418b973de9eb6cb19ea385b134343634f3149f51d7f"
Dec 11 14:06:31 crc kubenswrapper[4924]: I1211 14:06:31.972028 4924 scope.go:117] "RemoveContainer" containerID="de8950371c9bdbfcefb1d529004eb68e3e9c1dead6abd1cd7e097a2abc6663eb"
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.188553 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jnlw"]
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.189600 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-controller" containerID="cri-o://52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.189971 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="sbdb" containerID="cri-o://ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.190019 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="nbdb" containerID="cri-o://2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.190057 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="northd" containerID="cri-o://57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.190096 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.190132 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-node" containerID="cri-o://43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.190168 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-acl-logging" containerID="cri-o://a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" gracePeriod=30
Dec 11 14:06:38 crc kubenswrapper[4924]: I1211 14:06:38.219477 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" containerID="cri-o://ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" gracePeriod=30
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.366054 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/3.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.368932 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovn-acl-logging/0.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.370016 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovn-controller/0.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.370748 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.429107 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovnkube-controller/3.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.434810 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovn-acl-logging/0.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.435925 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8jnlw_47432eab-9072-43ce-9bf7-0dbd6fa271e7/ovn-controller/0.log"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436423 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436444 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436452 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436462 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436469 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436475 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" exitCode=0
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436482 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" exitCode=143
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436488 4924 generic.go:334] "Generic (PLEG): container finished" podID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" exitCode=143
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436498 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"}
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436574 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw"
event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436587 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436597 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436607 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436615 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436618 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436723 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436737 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436742 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436748 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436753 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436759 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436765 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436770 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436776 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436785 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436794 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436800 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436805 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436810 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436816 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436820 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436826 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 
14:06:39.436831 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436836 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436841 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436848 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436856 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436862 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436868 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436873 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436878 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436889 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436895 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436900 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436905 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436910 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436916 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" event={"ID":"47432eab-9072-43ce-9bf7-0dbd6fa271e7","Type":"ContainerDied","Data":"e106210478db1204bd21cdc89005806723d773d95fbb0d3a2ee25194714f7df5"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436925 4924 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436930 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436936 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436941 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436946 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436952 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436956 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436962 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436967 4924 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.436972 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.437033 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jnlw" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.439133 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/2.log" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.442944 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/1.log" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.443025 4924 generic.go:334] "Generic (PLEG): container finished" podID="5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c" containerID="4da70e3dffee1c01a4ec9a873590ee5253b2b0924a75980fc26697bf92ddaa41" exitCode=2 Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.443083 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerDied","Data":"4da70e3dffee1c01a4ec9a873590ee5253b2b0924a75980fc26697bf92ddaa41"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.443127 4924 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c"} Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.443869 4924 scope.go:117] "RemoveContainer" 
containerID="4da70e3dffee1c01a4ec9a873590ee5253b2b0924a75980fc26697bf92ddaa41" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446643 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-p9xnd"] Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446837 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446855 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446866 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kubecfg-setup" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446874 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kubecfg-setup" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446886 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="northd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446894 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="northd" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446905 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-node" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446911 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-node" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446919 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" 
containerName="sbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446924 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="sbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446933 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167e3306-54e1-470a-a7d6-55b2742ca45e" containerName="registry" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446939 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="167e3306-54e1-470a-a7d6-55b2742ca45e" containerName="registry" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446947 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446953 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446962 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446968 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446975 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-acl-logging" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446981 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-acl-logging" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.446988 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-ovn-metrics" 
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.446993 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.447000 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447006 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.447014 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447020 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.447029 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="nbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447035 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="nbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447122 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447135 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="sbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447146 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="167e3306-54e1-470a-a7d6-55b2742ca45e" containerName="registry" Dec 11 
14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447155 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447163 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447169 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447176 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="northd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447182 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447190 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="kube-rbac-proxy-node" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447199 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="nbdb" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447208 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovn-acl-logging" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.447296 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447302 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" 
containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447430 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.447645 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" containerName="ovnkube-controller" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.449828 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450318 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450407 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450449 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450479 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin\") pod 
\"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450522 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450555 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450602 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450648 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450681 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") " Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450871 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450903 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450930 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450961 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.450998 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8d9c\" (UniqueName: \"kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451033 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451064 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451096 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451122 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451153 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.451187 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash\") pod \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\" (UID: \"47432eab-9072-43ce-9bf7-0dbd6fa271e7\") "
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452145 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket" (OuterVolumeSpecName: "log-socket") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452214 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452215 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452251 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452271 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452296 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452306 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash" (OuterVolumeSpecName: "host-slash") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452306 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452406 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452692 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log" (OuterVolumeSpecName: "node-log") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452981 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452983 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.453035 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.453146 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.453242 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.454007 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.458002 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c" (OuterVolumeSpecName: "kube-api-access-k8d9c") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "kube-api-access-k8d9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.452453 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.468444 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.479765 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "47432eab-9072-43ce-9bf7-0dbd6fa271e7" (UID: "47432eab-9072-43ce-9bf7-0dbd6fa271e7"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.507493 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.547826 4924 scope.go:117] "RemoveContainer" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.552735 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-script-lib\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.552875 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.552981 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-netd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553083 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-bin\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553122 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-log-socket\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553145 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25f46326-7b99-48d3-8877-3eb63321e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553167 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-var-lib-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553215 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-env-overrides\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553232 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9cr\" (UniqueName: \"kubernetes.io/projected/25f46326-7b99-48d3-8877-3eb63321e63e-kube-api-access-5x9cr\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553274 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-config\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553303 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-systemd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553367 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-systemd-units\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553399 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553444 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-node-log\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553466 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-etc-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553522 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-ovn\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553542 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-netns\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553567 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-slash\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553588 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553648 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-kubelet\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553711 4924 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-netns\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553749 4924 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553792 4924 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553801 4924 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553809 4924 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553817 4924 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-systemd-units\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553824 4924 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553832 4924 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553840 4924 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553848 4924 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-log-socket\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553856 4924 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553864 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8d9c\" (UniqueName: \"kubernetes.io/projected/47432eab-9072-43ce-9bf7-0dbd6fa271e7-kube-api-access-k8d9c\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553872 4924 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-node-log\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553880 4924 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553887 4924 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-run-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553895 4924 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-cni-netd\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553902 4924 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-kubelet\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553911 4924 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-host-slash\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553920 4924 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47432eab-9072-43ce-9bf7-0dbd6fa271e7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.553928 4924 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/47432eab-9072-43ce-9bf7-0dbd6fa271e7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.573671 4924 scope.go:117] "RemoveContainer" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.595001 4924 scope.go:117] "RemoveContainer" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.607565 4924 scope.go:117] "RemoveContainer" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.619675 4924 scope.go:117] "RemoveContainer" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.635405 4924 scope.go:117] "RemoveContainer" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.647179 4924 scope.go:117] "RemoveContainer" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.654936 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-netns\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.654976 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-slash\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655003 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655004 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-netns\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655031 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-kubelet\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655053 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-script-lib\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655056 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-slash\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655077 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655084 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-kubelet\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655102 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-netd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655128 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-bin\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655144 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-run-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655123 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655151 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-netd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655173 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-host-cni-bin\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655151 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-log-socket\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655198 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-log-socket\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655291 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25f46326-7b99-48d3-8877-3eb63321e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655347 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-var-lib-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655417 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-env-overrides\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655440 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9cr\" (UniqueName: \"kubernetes.io/projected/25f46326-7b99-48d3-8877-3eb63321e63e-kube-api-access-5x9cr\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655483 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-var-lib-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655488 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-config\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655528 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-systemd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655546 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-systemd-units\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655569 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655596 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-node-log\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655629 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-etc-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655649 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-ovn\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655721 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-ovn\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655744 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-systemd\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655765 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-systemd-units\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655784 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-run-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd"
Dec 11 14:06:39 crc 
kubenswrapper[4924]: I1211 14:06:39.655800 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-node-log\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.655809 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/25f46326-7b99-48d3-8877-3eb63321e63e-etc-openvswitch\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.656014 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-env-overrides\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.656243 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-config\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.656356 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/25f46326-7b99-48d3-8877-3eb63321e63e-ovnkube-script-lib\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.659784 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/25f46326-7b99-48d3-8877-3eb63321e63e-ovn-node-metrics-cert\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.660541 4924 scope.go:117] "RemoveContainer" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.671862 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.672189 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.672240 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} err="failed to get container status \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.672267 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.672565 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": container with ID starting with 008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4 not found: ID does not exist" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.672596 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} err="failed to get container status \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": rpc error: code = NotFound desc = could not find container \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": container with ID starting with 008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.672616 4924 scope.go:117] "RemoveContainer" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.673475 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9cr\" (UniqueName: \"kubernetes.io/projected/25f46326-7b99-48d3-8877-3eb63321e63e-kube-api-access-5x9cr\") pod \"ovnkube-node-p9xnd\" (UID: \"25f46326-7b99-48d3-8877-3eb63321e63e\") " pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.673686 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": container with ID starting with ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54 not found: ID does not exist" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 
14:06:39.673717 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} err="failed to get container status \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": rpc error: code = NotFound desc = could not find container \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": container with ID starting with ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.673733 4924 scope.go:117] "RemoveContainer" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.674007 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": container with ID starting with 2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13 not found: ID does not exist" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674039 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} err="failed to get container status \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": rpc error: code = NotFound desc = could not find container \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": container with ID starting with 2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674058 4924 scope.go:117] "RemoveContainer" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" Dec 11 14:06:39 crc 
kubenswrapper[4924]: E1211 14:06:39.674421 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": container with ID starting with 57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f not found: ID does not exist" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674455 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} err="failed to get container status \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": rpc error: code = NotFound desc = could not find container \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": container with ID starting with 57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674477 4924 scope.go:117] "RemoveContainer" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.674797 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": container with ID starting with 5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28 not found: ID does not exist" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674827 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} err="failed to get container status 
\"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": rpc error: code = NotFound desc = could not find container \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": container with ID starting with 5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.674878 4924 scope.go:117] "RemoveContainer" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.675179 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": container with ID starting with 43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297 not found: ID does not exist" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675219 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} err="failed to get container status \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": rpc error: code = NotFound desc = could not find container \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": container with ID starting with 43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675243 4924 scope.go:117] "RemoveContainer" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.675517 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": container with ID starting with a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9 not found: ID does not exist" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675555 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} err="failed to get container status \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": rpc error: code = NotFound desc = could not find container \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": container with ID starting with a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675572 4924 scope.go:117] "RemoveContainer" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.675820 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": container with ID starting with 52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde not found: ID does not exist" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675860 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} err="failed to get container status \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": rpc error: code = NotFound desc = could not find container \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": container with ID 
starting with 52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.675884 4924 scope.go:117] "RemoveContainer" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: E1211 14:06:39.676120 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": container with ID starting with 5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7 not found: ID does not exist" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676147 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} err="failed to get container status \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": rpc error: code = NotFound desc = could not find container \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": container with ID starting with 5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676168 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676476 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} err="failed to get container status \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": 
container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676496 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676851 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} err="failed to get container status \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": rpc error: code = NotFound desc = could not find container \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": container with ID starting with 008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.676876 4924 scope.go:117] "RemoveContainer" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677093 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} err="failed to get container status \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": rpc error: code = NotFound desc = could not find container \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": container with ID starting with ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677115 4924 scope.go:117] "RemoveContainer" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677355 4924 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} err="failed to get container status \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": rpc error: code = NotFound desc = could not find container \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": container with ID starting with 2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677382 4924 scope.go:117] "RemoveContainer" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677686 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} err="failed to get container status \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": rpc error: code = NotFound desc = could not find container \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": container with ID starting with 57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.677752 4924 scope.go:117] "RemoveContainer" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678020 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} err="failed to get container status \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": rpc error: code = NotFound desc = could not find container \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": container with ID starting with 5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28 not found: ID does not 
exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678042 4924 scope.go:117] "RemoveContainer" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678245 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} err="failed to get container status \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": rpc error: code = NotFound desc = could not find container \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": container with ID starting with 43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678263 4924 scope.go:117] "RemoveContainer" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678495 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} err="failed to get container status \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": rpc error: code = NotFound desc = could not find container \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": container with ID starting with a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678517 4924 scope.go:117] "RemoveContainer" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678756 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} err="failed to get container status 
\"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": rpc error: code = NotFound desc = could not find container \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": container with ID starting with 52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678774 4924 scope.go:117] "RemoveContainer" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678947 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} err="failed to get container status \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": rpc error: code = NotFound desc = could not find container \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": container with ID starting with 5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.678974 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679161 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} err="failed to get container status \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679180 4924 scope.go:117] "RemoveContainer" 
containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679421 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} err="failed to get container status \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": rpc error: code = NotFound desc = could not find container \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": container with ID starting with 008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679438 4924 scope.go:117] "RemoveContainer" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679641 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} err="failed to get container status \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": rpc error: code = NotFound desc = could not find container \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": container with ID starting with ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679656 4924 scope.go:117] "RemoveContainer" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679835 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} err="failed to get container status \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": rpc error: code = NotFound desc = could 
not find container \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": container with ID starting with 2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.679857 4924 scope.go:117] "RemoveContainer" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680102 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} err="failed to get container status \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": rpc error: code = NotFound desc = could not find container \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": container with ID starting with 57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680152 4924 scope.go:117] "RemoveContainer" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680354 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} err="failed to get container status \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": rpc error: code = NotFound desc = could not find container \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": container with ID starting with 5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680373 4924 scope.go:117] "RemoveContainer" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 
14:06:39.680628 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} err="failed to get container status \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": rpc error: code = NotFound desc = could not find container \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": container with ID starting with 43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680659 4924 scope.go:117] "RemoveContainer" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680904 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} err="failed to get container status \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": rpc error: code = NotFound desc = could not find container \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": container with ID starting with a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.680921 4924 scope.go:117] "RemoveContainer" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681133 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} err="failed to get container status \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": rpc error: code = NotFound desc = could not find container \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": container with ID starting with 
52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681149 4924 scope.go:117] "RemoveContainer" containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681369 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} err="failed to get container status \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": rpc error: code = NotFound desc = could not find container \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": container with ID starting with 5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681383 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681652 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} err="failed to get container status \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681671 4924 scope.go:117] "RemoveContainer" containerID="008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681898 4924 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4"} err="failed to get container status \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": rpc error: code = NotFound desc = could not find container \"008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4\": container with ID starting with 008fd890ac88b37cb657b463cccf1f51cd3d6e96d45afe00d31f986dab9b6fa4 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.681915 4924 scope.go:117] "RemoveContainer" containerID="ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682215 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54"} err="failed to get container status \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": rpc error: code = NotFound desc = could not find container \"ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54\": container with ID starting with ff6b34eec918bc803f49d54aa12ea0b8a627e66f66eed2bf5226d6f1e61c5d54 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682235 4924 scope.go:117] "RemoveContainer" containerID="2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682471 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13"} err="failed to get container status \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": rpc error: code = NotFound desc = could not find container \"2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13\": container with ID starting with 2e9a080ed95223eda052063990bcc01eeab66e47bf860c6c9ae49acb25d79a13 not found: ID does not 
exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682488 4924 scope.go:117] "RemoveContainer" containerID="57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682708 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f"} err="failed to get container status \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": rpc error: code = NotFound desc = could not find container \"57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f\": container with ID starting with 57e86b87c78a7c5b8a669f072ab535056c6fa3c8a352ef1ca11693d845af407f not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682727 4924 scope.go:117] "RemoveContainer" containerID="5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682956 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28"} err="failed to get container status \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": rpc error: code = NotFound desc = could not find container \"5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28\": container with ID starting with 5b0b6ff2139cb427a7c1ca989ba7edc7658cc273c8929bf358da7879137f6d28 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.682984 4924 scope.go:117] "RemoveContainer" containerID="43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683224 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297"} err="failed to get container status 
\"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": rpc error: code = NotFound desc = could not find container \"43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297\": container with ID starting with 43efc44df337efceade16e16ec69569b66f15e399d7d4f3becf2231668abd297 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683242 4924 scope.go:117] "RemoveContainer" containerID="a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683498 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9"} err="failed to get container status \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": rpc error: code = NotFound desc = could not find container \"a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9\": container with ID starting with a4838c0d8b606c82b456c3074a97e1eea43a0b4ffa7b76fc5eaefee16d11a4c9 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683518 4924 scope.go:117] "RemoveContainer" containerID="52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683781 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde"} err="failed to get container status \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": rpc error: code = NotFound desc = could not find container \"52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde\": container with ID starting with 52aa103005c664f6fe1dbe9add7b643fb17f6352adbb85abc6f3409ab3afabde not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.683819 4924 scope.go:117] "RemoveContainer" 
containerID="5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.684049 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7"} err="failed to get container status \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": rpc error: code = NotFound desc = could not find container \"5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7\": container with ID starting with 5f678b03dec7d671ac8576c0a53e7d14a0441f186cc72208d8edc96d1aebe0a7 not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.684068 4924 scope.go:117] "RemoveContainer" containerID="ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.684311 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d"} err="failed to get container status \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": rpc error: code = NotFound desc = could not find container \"ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d\": container with ID starting with ed4574085eb7972cd667ac26794e9618b2514461e5af8da85cdba972e1d84c8d not found: ID does not exist" Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.769779 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jnlw"] Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.772985 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jnlw"] Dec 11 14:06:39 crc kubenswrapper[4924]: I1211 14:06:39.867768 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:39 crc kubenswrapper[4924]: W1211 14:06:39.882292 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25f46326_7b99_48d3_8877_3eb63321e63e.slice/crio-2cb0897499cab92c6dc78130b90faac5bb18cc3a4e50dbde3af4e43913c4f51b WatchSource:0}: Error finding container 2cb0897499cab92c6dc78130b90faac5bb18cc3a4e50dbde3af4e43913c4f51b: Status 404 returned error can't find the container with id 2cb0897499cab92c6dc78130b90faac5bb18cc3a4e50dbde3af4e43913c4f51b Dec 11 14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.451575 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/2.log" Dec 11 14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.452520 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/1.log" Dec 11 14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.452687 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5vrtp" event={"ID":"5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c","Type":"ContainerStarted","Data":"774cd8f3199a146d4d0ddb7757a9120a22aeaedde326a51dc8ae8144f3e9ba89"} Dec 11 14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.455146 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"177cdea64046afe24e7899646ba2a4b532583c706882e4a2cf6fd0b7bc95195c"} Dec 11 14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.455214 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"2cb0897499cab92c6dc78130b90faac5bb18cc3a4e50dbde3af4e43913c4f51b"} Dec 11 
14:06:40 crc kubenswrapper[4924]: I1211 14:06:40.796031 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47432eab-9072-43ce-9bf7-0dbd6fa271e7" path="/var/lib/kubelet/pods/47432eab-9072-43ce-9bf7-0dbd6fa271e7/volumes" Dec 11 14:06:41 crc kubenswrapper[4924]: I1211 14:06:41.465457 4924 generic.go:334] "Generic (PLEG): container finished" podID="25f46326-7b99-48d3-8877-3eb63321e63e" containerID="177cdea64046afe24e7899646ba2a4b532583c706882e4a2cf6fd0b7bc95195c" exitCode=0 Dec 11 14:06:41 crc kubenswrapper[4924]: I1211 14:06:41.465529 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerDied","Data":"177cdea64046afe24e7899646ba2a4b532583c706882e4a2cf6fd0b7bc95195c"} Dec 11 14:06:42 crc kubenswrapper[4924]: I1211 14:06:42.473184 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"d11f992ff44227f0e3068e9cd533b6e3bf6d671f4d239fdb8cc9a27aed6951dd"} Dec 11 14:06:43 crc kubenswrapper[4924]: I1211 14:06:43.489545 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"294414eaa63e26ff10f2824bf7012869600ee64591781a3dc09fd0fa6049aacf"} Dec 11 14:06:43 crc kubenswrapper[4924]: I1211 14:06:43.489735 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"b5caf66624d0cd480b64631c4802bdd37a79abac3958671cf2207d208b98b897"} Dec 11 14:06:43 crc kubenswrapper[4924]: I1211 14:06:43.489777 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" 
event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"f206c2430eda3ecc0c147570b7c160b80d7090e87cca8997fbf2cd57ac8b5dab"} Dec 11 14:06:44 crc kubenswrapper[4924]: I1211 14:06:44.497539 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"47f8074499970838bb3fc6d5b074716352d367a1948a5e38666df6cb19529d3d"} Dec 11 14:06:44 crc kubenswrapper[4924]: I1211 14:06:44.497880 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"1174d1d2b653d99d28ddf10e1ddab97b7424d1480aece5f4b2b2744d41762174"} Dec 11 14:06:49 crc kubenswrapper[4924]: I1211 14:06:49.530038 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"ff8f49519890c51642fc74bac5c70bbaa2f150d4b3576e50163fe4607924ffd3"} Dec 11 14:06:51 crc kubenswrapper[4924]: I1211 14:06:51.561443 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" event={"ID":"25f46326-7b99-48d3-8877-3eb63321e63e","Type":"ContainerStarted","Data":"7fca8be4bfdd036cddbc0dbf672a283334d956410256ead6e63b4043f94fd448"} Dec 11 14:06:51 crc kubenswrapper[4924]: I1211 14:06:51.562502 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:51 crc kubenswrapper[4924]: I1211 14:06:51.562543 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:51 crc kubenswrapper[4924]: I1211 14:06:51.588600 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:51 
crc kubenswrapper[4924]: I1211 14:06:51.628910 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" podStartSLOduration=12.628888025 podStartE2EDuration="12.628888025s" podCreationTimestamp="2025-12-11 14:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:51.60065637 +0000 UTC m=+825.110137357" watchObservedRunningTime="2025-12-11 14:06:51.628888025 +0000 UTC m=+825.138369002" Dec 11 14:06:52 crc kubenswrapper[4924]: I1211 14:06:52.565682 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:06:52 crc kubenswrapper[4924]: I1211 14:06:52.587902 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:07:09 crc kubenswrapper[4924]: I1211 14:07:09.891285 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-p9xnd" Dec 11 14:07:32 crc kubenswrapper[4924]: I1211 14:07:32.005449 4924 scope.go:117] "RemoveContainer" containerID="59c071031d1c84021ccd1f1785424e4b73db3f2127e07e06013078912841164c" Dec 11 14:07:32 crc kubenswrapper[4924]: I1211 14:07:32.811652 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5vrtp_5cbdb7db-1aa3-4cc6-8a0d-9461af8e1b8c/kube-multus/2.log" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.381597 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"] Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.382447 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wkhr4" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="registry-server" 
containerID="cri-o://48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d" gracePeriod=30 Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.742809 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkhr4" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.771960 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2g4g\" (UniqueName: \"kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g\") pod \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.772081 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content\") pod \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.772126 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities\") pod \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\" (UID: \"e6a6ca7a-2612-4a09-9d44-a364568ef20e\") " Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.773747 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities" (OuterVolumeSpecName: "utilities") pod "e6a6ca7a-2612-4a09-9d44-a364568ef20e" (UID: "e6a6ca7a-2612-4a09-9d44-a364568ef20e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.779719 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g" (OuterVolumeSpecName: "kube-api-access-r2g4g") pod "e6a6ca7a-2612-4a09-9d44-a364568ef20e" (UID: "e6a6ca7a-2612-4a09-9d44-a364568ef20e"). InnerVolumeSpecName "kube-api-access-r2g4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.800416 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6a6ca7a-2612-4a09-9d44-a364568ef20e" (UID: "e6a6ca7a-2612-4a09-9d44-a364568ef20e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.873481 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2g4g\" (UniqueName: \"kubernetes.io/projected/e6a6ca7a-2612-4a09-9d44-a364568ef20e-kube-api-access-r2g4g\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.873525 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:08 crc kubenswrapper[4924]: I1211 14:08:08.873539 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6ca7a-2612-4a09-9d44-a364568ef20e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.011648 4924 generic.go:334] "Generic (PLEG): container finished" podID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" 
containerID="48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d" exitCode=0 Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.011758 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerDied","Data":"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d"} Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.011851 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wkhr4" event={"ID":"e6a6ca7a-2612-4a09-9d44-a364568ef20e","Type":"ContainerDied","Data":"fcfbcf57a238efcecf680aa049f2094e775f80d4c456294700cadbf237d32c8e"} Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.011894 4924 scope.go:117] "RemoveContainer" containerID="48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.012261 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wkhr4" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.027688 4924 scope.go:117] "RemoveContainer" containerID="f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.044915 4924 scope.go:117] "RemoveContainer" containerID="31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.075035 4924 scope.go:117] "RemoveContainer" containerID="48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d" Dec 11 14:08:09 crc kubenswrapper[4924]: E1211 14:08:09.075502 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d\": container with ID starting with 48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d not found: ID does not exist" containerID="48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.075540 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d"} err="failed to get container status \"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d\": rpc error: code = NotFound desc = could not find container \"48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d\": container with ID starting with 48c5db3d2f07886ac7e56cd5ab635423ec8400c42a792440a0f3e9f06f11001d not found: ID does not exist" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.075567 4924 scope.go:117] "RemoveContainer" containerID="f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45" Dec 11 14:08:09 crc kubenswrapper[4924]: E1211 14:08:09.075867 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45\": container with ID starting with f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45 not found: ID does not exist" containerID="f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.075906 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45"} err="failed to get container status \"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45\": rpc error: code = NotFound desc = could not find container \"f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45\": container with ID starting with f291d47c4ea3facc24d6fa836ad82a4ed2ee7919b1837333b0d4a22be73f3d45 not found: ID does not exist" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.075927 4924 scope.go:117] "RemoveContainer" containerID="31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696" Dec 11 14:08:09 crc kubenswrapper[4924]: E1211 14:08:09.076303 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696\": container with ID starting with 31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696 not found: ID does not exist" containerID="31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.076358 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696"} err="failed to get container status \"31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696\": rpc error: code = NotFound desc = could not find container 
\"31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696\": container with ID starting with 31fa62283490ef210972512dff4f7416643c1c3dbeee1528131732f51bdf1696 not found: ID does not exist" Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.085812 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"] Dec 11 14:08:09 crc kubenswrapper[4924]: I1211 14:08:09.089851 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wkhr4"] Dec 11 14:08:10 crc kubenswrapper[4924]: I1211 14:08:10.791161 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" path="/var/lib/kubelet/pods/e6a6ca7a-2612-4a09-9d44-a364568ef20e/volumes" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.653075 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n"] Dec 11 14:08:12 crc kubenswrapper[4924]: E1211 14:08:12.653365 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="registry-server" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.653382 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="registry-server" Dec 11 14:08:12 crc kubenswrapper[4924]: E1211 14:08:12.653395 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="extract-utilities" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.653404 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="extract-utilities" Dec 11 14:08:12 crc kubenswrapper[4924]: E1211 14:08:12.653425 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" 
containerName="extract-content" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.653434 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="extract-content" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.653553 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a6ca7a-2612-4a09-9d44-a364568ef20e" containerName="registry-server" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.654505 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.656813 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.662408 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n"] Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.718782 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.719002 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbgjn\" (UniqueName: \"kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.719084 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.820578 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.820681 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.820719 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbgjn\" (UniqueName: \"kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: 
I1211 14:08:12.821248 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.821277 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.841799 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbgjn\" (UniqueName: \"kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:12 crc kubenswrapper[4924]: I1211 14:08:12.969166 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:13 crc kubenswrapper[4924]: I1211 14:08:13.173519 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n"] Dec 11 14:08:14 crc kubenswrapper[4924]: I1211 14:08:14.041768 4924 generic.go:334] "Generic (PLEG): container finished" podID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerID="e7df9f91b1ad10ea92cb296934fddd92da7330c0d4d3ffb9f66a19f2187c79fd" exitCode=0 Dec 11 14:08:14 crc kubenswrapper[4924]: I1211 14:08:14.041846 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" event={"ID":"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0","Type":"ContainerDied","Data":"e7df9f91b1ad10ea92cb296934fddd92da7330c0d4d3ffb9f66a19f2187c79fd"} Dec 11 14:08:14 crc kubenswrapper[4924]: I1211 14:08:14.043351 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" event={"ID":"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0","Type":"ContainerStarted","Data":"c08ec466d8ef9462f24ec033ab597ccaac81118b4e5d7cf499e381805813aa10"} Dec 11 14:08:14 crc kubenswrapper[4924]: I1211 14:08:14.045083 4924 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.414809 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.419024 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.428489 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.437230 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.437352 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.452723 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.452782 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5krg\" (UniqueName: \"kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.452828 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.553816 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.553873 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5krg\" (UniqueName: \"kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.553925 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.554368 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.554499 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.578469 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5krg\" (UniqueName: \"kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg\") pod \"redhat-operators-2zkkx\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.756302 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:15 crc kubenswrapper[4924]: I1211 14:08:15.942051 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:15 crc kubenswrapper[4924]: W1211 14:08:15.946849 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762cbd54_2e6f_4284_8ae5_f85244456687.slice/crio-f9ae5a468f46c43fdc6c2bbdfe69d918d68618cb2311add5fdeda3766b322c35 WatchSource:0}: Error finding container f9ae5a468f46c43fdc6c2bbdfe69d918d68618cb2311add5fdeda3766b322c35: Status 404 returned error can't find the container with id f9ae5a468f46c43fdc6c2bbdfe69d918d68618cb2311add5fdeda3766b322c35 Dec 11 14:08:16 crc kubenswrapper[4924]: I1211 14:08:16.054250 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerStarted","Data":"f9ae5a468f46c43fdc6c2bbdfe69d918d68618cb2311add5fdeda3766b322c35"} Dec 11 14:08:17 crc kubenswrapper[4924]: I1211 14:08:17.063006 4924 generic.go:334] "Generic (PLEG): container finished" podID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" 
containerID="a34937194638e097928d1599670d203f88a94d7f6254066e1992d67690e2c875" exitCode=0 Dec 11 14:08:17 crc kubenswrapper[4924]: I1211 14:08:17.063144 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" event={"ID":"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0","Type":"ContainerDied","Data":"a34937194638e097928d1599670d203f88a94d7f6254066e1992d67690e2c875"} Dec 11 14:08:17 crc kubenswrapper[4924]: I1211 14:08:17.065984 4924 generic.go:334] "Generic (PLEG): container finished" podID="762cbd54-2e6f-4284-8ae5-f85244456687" containerID="65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210" exitCode=0 Dec 11 14:08:17 crc kubenswrapper[4924]: I1211 14:08:17.066044 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerDied","Data":"65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210"} Dec 11 14:08:18 crc kubenswrapper[4924]: I1211 14:08:18.072373 4924 generic.go:334] "Generic (PLEG): container finished" podID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerID="648ec954025ac8312e9d092f873add4725d7123d1bae204dbd706c26bbd00016" exitCode=0 Dec 11 14:08:18 crc kubenswrapper[4924]: I1211 14:08:18.072412 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" event={"ID":"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0","Type":"ContainerDied","Data":"648ec954025ac8312e9d092f873add4725d7123d1bae204dbd706c26bbd00016"} Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.084150 4924 generic.go:334] "Generic (PLEG): container finished" podID="762cbd54-2e6f-4284-8ae5-f85244456687" containerID="75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90" exitCode=0 Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.084253 4924 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerDied","Data":"75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90"} Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.371081 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.397850 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util\") pod \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.397933 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle\") pod \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.397970 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbgjn\" (UniqueName: \"kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn\") pod \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\" (UID: \"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0\") " Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.400007 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle" (OuterVolumeSpecName: "bundle") pod "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" (UID: "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.403851 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn" (OuterVolumeSpecName: "kube-api-access-sbgjn") pod "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" (UID: "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0"). InnerVolumeSpecName "kube-api-access-sbgjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.408996 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util" (OuterVolumeSpecName: "util") pod "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" (UID: "bbefe9ba-780c-43d7-9e6c-e0d204adb2f0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.499230 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.499278 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbgjn\" (UniqueName: \"kubernetes.io/projected/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-kube-api-access-sbgjn\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:19 crc kubenswrapper[4924]: I1211 14:08:19.499287 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bbefe9ba-780c-43d7-9e6c-e0d204adb2f0-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:20 crc kubenswrapper[4924]: I1211 14:08:20.091113 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" 
event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerStarted","Data":"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f"} Dec 11 14:08:20 crc kubenswrapper[4924]: I1211 14:08:20.093755 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" event={"ID":"bbefe9ba-780c-43d7-9e6c-e0d204adb2f0","Type":"ContainerDied","Data":"c08ec466d8ef9462f24ec033ab597ccaac81118b4e5d7cf499e381805813aa10"} Dec 11 14:08:20 crc kubenswrapper[4924]: I1211 14:08:20.093786 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c08ec466d8ef9462f24ec033ab597ccaac81118b4e5d7cf499e381805813aa10" Dec 11 14:08:20 crc kubenswrapper[4924]: I1211 14:08:20.093851 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n" Dec 11 14:08:20 crc kubenswrapper[4924]: I1211 14:08:20.111788 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zkkx" podStartSLOduration=2.390922371 podStartE2EDuration="5.111771014s" podCreationTimestamp="2025-12-11 14:08:15 +0000 UTC" firstStartedPulling="2025-12-11 14:08:17.067723105 +0000 UTC m=+910.577204122" lastFinishedPulling="2025-12-11 14:08:19.788571778 +0000 UTC m=+913.298052765" observedRunningTime="2025-12-11 14:08:20.109960292 +0000 UTC m=+913.619441299" watchObservedRunningTime="2025-12-11 14:08:20.111771014 +0000 UTC m=+913.621251991" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.044379 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl"] Dec 11 14:08:22 crc kubenswrapper[4924]: E1211 14:08:22.044855 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="util" Dec 
11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.044868 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="util" Dec 11 14:08:22 crc kubenswrapper[4924]: E1211 14:08:22.044890 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="extract" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.044897 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="extract" Dec 11 14:08:22 crc kubenswrapper[4924]: E1211 14:08:22.044911 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="pull" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.044923 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="pull" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.045030 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbefe9ba-780c-43d7-9e6c-e0d204adb2f0" containerName="extract" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.045768 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.047837 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.056691 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl"] Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.127868 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.127924 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.128003 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrsg\" (UniqueName: \"kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: 
I1211 14:08:22.229444 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrsg\" (UniqueName: \"kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.229510 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.229549 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.230047 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.230386 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.258720 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrsg\" (UniqueName: \"kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.360756 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:22 crc kubenswrapper[4924]: I1211 14:08:22.526428 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl"] Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.110522 4924 generic.go:334] "Generic (PLEG): container finished" podID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerID="4ebfecf6785e9809f469175b4f6155de2105eab9b042931057aaca234efb95dc" exitCode=0 Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.110573 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" event={"ID":"c2fc2981-ac1f-431c-8ffd-c07b443211af","Type":"ContainerDied","Data":"4ebfecf6785e9809f469175b4f6155de2105eab9b042931057aaca234efb95dc"} Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.110604 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" event={"ID":"c2fc2981-ac1f-431c-8ffd-c07b443211af","Type":"ContainerStarted","Data":"455a07219bf722f37d7217a057a3ca8fe2b7b80ebd1df9db09d86051f04da7a6"} Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.472550 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs"] Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.473896 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.488102 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs"] Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.667787 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.667920 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.667948 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-6wqsc\" (UniqueName: \"kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.769376 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.769551 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.770025 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.770025 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util\") pod 
\"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.769584 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wqsc\" (UniqueName: \"kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.788481 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wqsc\" (UniqueName: \"kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:23 crc kubenswrapper[4924]: I1211 14:08:23.800110 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:24 crc kubenswrapper[4924]: I1211 14:08:24.222449 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs"] Dec 11 14:08:24 crc kubenswrapper[4924]: W1211 14:08:24.223897 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod432f37b8_3eac_4e9a_bc87_fa34be6e9fbd.slice/crio-d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f WatchSource:0}: Error finding container d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f: Status 404 returned error can't find the container with id d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f Dec 11 14:08:25 crc kubenswrapper[4924]: I1211 14:08:25.121521 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" event={"ID":"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd","Type":"ContainerStarted","Data":"d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f"} Dec 11 14:08:25 crc kubenswrapper[4924]: I1211 14:08:25.757074 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:25 crc kubenswrapper[4924]: I1211 14:08:25.757138 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:26 crc kubenswrapper[4924]: I1211 14:08:26.128283 4924 generic.go:334] "Generic (PLEG): container finished" podID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerID="ae8666a40c265a9b997a060380b9809599194aa856241b62e275aa7e6d41212e" exitCode=0 Dec 11 14:08:26 crc kubenswrapper[4924]: I1211 14:08:26.128388 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" event={"ID":"c2fc2981-ac1f-431c-8ffd-c07b443211af","Type":"ContainerDied","Data":"ae8666a40c265a9b997a060380b9809599194aa856241b62e275aa7e6d41212e"} Dec 11 14:08:26 crc kubenswrapper[4924]: I1211 14:08:26.130034 4924 generic.go:334] "Generic (PLEG): container finished" podID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerID="ea96a67ce8521daadfed88090e9cb42c05a95aba7290fabdd0f39b63ceabec2c" exitCode=0 Dec 11 14:08:26 crc kubenswrapper[4924]: I1211 14:08:26.130157 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" event={"ID":"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd","Type":"ContainerDied","Data":"ea96a67ce8521daadfed88090e9cb42c05a95aba7290fabdd0f39b63ceabec2c"} Dec 11 14:08:26 crc kubenswrapper[4924]: I1211 14:08:26.838597 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2zkkx" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="registry-server" probeResult="failure" output=< Dec 11 14:08:26 crc kubenswrapper[4924]: timeout: failed to connect service ":50051" within 1s Dec 11 14:08:26 crc kubenswrapper[4924]: > Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.011569 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.012732 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.095399 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.108230 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.108275 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgdw\" (UniqueName: \"kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.108523 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.138119 4924 generic.go:334] "Generic (PLEG): container finished" podID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerID="3b66372d007991a800b076d7782ff83d80000d3c3f22dc1b2a15ffa058c912d2" exitCode=0 Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.138158 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" 
event={"ID":"c2fc2981-ac1f-431c-8ffd-c07b443211af","Type":"ContainerDied","Data":"3b66372d007991a800b076d7782ff83d80000d3c3f22dc1b2a15ffa058c912d2"} Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.209888 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.209944 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgdw\" (UniqueName: \"kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.209996 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.210442 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.210481 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content\") pod 
\"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.241479 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgdw\" (UniqueName: \"kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw\") pod \"certified-operators-nshwf\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.325605 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:27 crc kubenswrapper[4924]: I1211 14:08:27.804236 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.145618 4924 generic.go:334] "Generic (PLEG): container finished" podID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerID="15f0b447a84641a55f7943df6f96d7273c3d5994f950280a4e08ec385920ec0a" exitCode=0 Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.145801 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerDied","Data":"15f0b447a84641a55f7943df6f96d7273c3d5994f950280a4e08ec385920ec0a"} Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.145963 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerStarted","Data":"506ead83b08100977bf9086a34c1486164bfa27de3624245b179c8e0cd61e8e7"} Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.649811 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.836617 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrsg\" (UniqueName: \"kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg\") pod \"c2fc2981-ac1f-431c-8ffd-c07b443211af\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.837020 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util\") pod \"c2fc2981-ac1f-431c-8ffd-c07b443211af\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.837071 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle\") pod \"c2fc2981-ac1f-431c-8ffd-c07b443211af\" (UID: \"c2fc2981-ac1f-431c-8ffd-c07b443211af\") " Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.837984 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle" (OuterVolumeSpecName: "bundle") pod "c2fc2981-ac1f-431c-8ffd-c07b443211af" (UID: "c2fc2981-ac1f-431c-8ffd-c07b443211af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.857706 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg" (OuterVolumeSpecName: "kube-api-access-lnrsg") pod "c2fc2981-ac1f-431c-8ffd-c07b443211af" (UID: "c2fc2981-ac1f-431c-8ffd-c07b443211af"). InnerVolumeSpecName "kube-api-access-lnrsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.859942 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util" (OuterVolumeSpecName: "util") pod "c2fc2981-ac1f-431c-8ffd-c07b443211af" (UID: "c2fc2981-ac1f-431c-8ffd-c07b443211af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.938095 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.938132 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrsg\" (UniqueName: \"kubernetes.io/projected/c2fc2981-ac1f-431c-8ffd-c07b443211af-kube-api-access-lnrsg\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:28 crc kubenswrapper[4924]: I1211 14:08:28.938147 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2fc2981-ac1f-431c-8ffd-c07b443211af-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:29 crc kubenswrapper[4924]: I1211 14:08:29.154118 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" event={"ID":"c2fc2981-ac1f-431c-8ffd-c07b443211af","Type":"ContainerDied","Data":"455a07219bf722f37d7217a057a3ca8fe2b7b80ebd1df9db09d86051f04da7a6"} Dec 11 14:08:29 crc kubenswrapper[4924]: I1211 14:08:29.154826 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455a07219bf722f37d7217a057a3ca8fe2b7b80ebd1df9db09d86051f04da7a6" Dec 11 14:08:29 crc kubenswrapper[4924]: I1211 14:08:29.154177 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.171374 4924 generic.go:334] "Generic (PLEG): container finished" podID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerID="a44e96d6cd4b08df031a314a90fc7cd2ddd5d85cf93a5c49dbab687811f11700" exitCode=0 Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.171591 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerDied","Data":"a44e96d6cd4b08df031a314a90fc7cd2ddd5d85cf93a5c49dbab687811f11700"} Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.823658 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr"] Dec 11 14:08:31 crc kubenswrapper[4924]: E1211 14:08:31.824195 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="pull" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.824218 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="pull" Dec 11 14:08:31 crc kubenswrapper[4924]: E1211 14:08:31.824258 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="util" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.824266 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="util" Dec 11 14:08:31 crc kubenswrapper[4924]: E1211 14:08:31.824281 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="extract" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.824303 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" 
containerName="extract" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.824561 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2fc2981-ac1f-431c-8ffd-c07b443211af" containerName="extract" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.825378 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.829856 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr"] Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.834047 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-t568r" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.834341 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.834394 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.913204 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n"] Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.913858 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.919599 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nsg4q" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.920438 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.935430 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n"] Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.946792 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6"] Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.947479 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.971085 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6"] Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.984931 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.985004 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.985024 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vddr\" (UniqueName: \"kubernetes.io/projected/29650e77-3c2e-45da-bac3-f26fe39e95d9-kube-api-access-6vddr\") pod \"obo-prometheus-operator-668cf9dfbb-fdkqr\" (UID: \"29650e77-3c2e-45da-bac3-f26fe39e95d9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.985048 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:31 crc kubenswrapper[4924]: I1211 14:08:31.985105 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.086161 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.086239 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.086281 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.086305 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vddr\" (UniqueName: \"kubernetes.io/projected/29650e77-3c2e-45da-bac3-f26fe39e95d9-kube-api-access-6vddr\") pod \"obo-prometheus-operator-668cf9dfbb-fdkqr\" (UID: \"29650e77-3c2e-45da-bac3-f26fe39e95d9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.086361 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.091967 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.092035 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.092294 4924 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/847bf44c-6f92-49a3-8714-34558df6f0f7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6\" (UID: \"847bf44c-6f92-49a3-8714-34558df6f0f7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.097887 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7015f850-c4bf-4212-b23d-4e14e2e8edb1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6567b776c7-tn75n\" (UID: \"7015f850-c4bf-4212-b23d-4e14e2e8edb1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.106920 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vddr\" (UniqueName: \"kubernetes.io/projected/29650e77-3c2e-45da-bac3-f26fe39e95d9-kube-api-access-6vddr\") pod \"obo-prometheus-operator-668cf9dfbb-fdkqr\" (UID: \"29650e77-3c2e-45da-bac3-f26fe39e95d9\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.132749 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-b4c8b"] Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.133607 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.135817 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.136865 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-h9h5f" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.148720 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.151304 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-b4c8b"] Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.190203 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/226608b6-d0d0-419d-aa80-788bfe423da4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.190271 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs428\" (UniqueName: \"kubernetes.io/projected/226608b6-d0d0-419d-aa80-788bfe423da4-kube-api-access-qs428\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.234507 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.272069 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.292169 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs428\" (UniqueName: \"kubernetes.io/projected/226608b6-d0d0-419d-aa80-788bfe423da4-kube-api-access-qs428\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.292316 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/226608b6-d0d0-419d-aa80-788bfe423da4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.297056 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/226608b6-d0d0-419d-aa80-788bfe423da4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.313937 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs428\" (UniqueName: \"kubernetes.io/projected/226608b6-d0d0-419d-aa80-788bfe423da4-kube-api-access-qs428\") pod \"observability-operator-d8bb48f5d-b4c8b\" (UID: \"226608b6-d0d0-419d-aa80-788bfe423da4\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.329802 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-hg49r"] Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.330502 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.332860 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-lpp56" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.350727 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-hg49r"] Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.395267 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-openshift-service-ca\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.396290 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfgnj\" (UniqueName: \"kubernetes.io/projected/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-kube-api-access-rfgnj\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.456904 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.497904 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfgnj\" (UniqueName: \"kubernetes.io/projected/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-kube-api-access-rfgnj\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.497993 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-openshift-service-ca\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.498880 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-openshift-service-ca\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.531430 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfgnj\" (UniqueName: \"kubernetes.io/projected/2b4ea8b1-e726-4f02-aa91-f97dc1122eab-kube-api-access-rfgnj\") pod \"perses-operator-5446b9c989-hg49r\" (UID: \"2b4ea8b1-e726-4f02-aa91-f97dc1122eab\") " pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:32 crc kubenswrapper[4924]: I1211 14:08:32.679362 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:33 crc kubenswrapper[4924]: I1211 14:08:33.406961 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6"] Dec 11 14:08:33 crc kubenswrapper[4924]: I1211 14:08:33.424232 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr"] Dec 11 14:08:33 crc kubenswrapper[4924]: I1211 14:08:33.467935 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-hg49r"] Dec 11 14:08:33 crc kubenswrapper[4924]: I1211 14:08:33.477768 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-b4c8b"] Dec 11 14:08:33 crc kubenswrapper[4924]: W1211 14:08:33.480099 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b4ea8b1_e726_4f02_aa91_f97dc1122eab.slice/crio-9850424081dd233d17998165cb4b3ca6667dd860f1070cb013611f226b1034d6 WatchSource:0}: Error finding container 9850424081dd233d17998165cb4b3ca6667dd860f1070cb013611f226b1034d6: Status 404 returned error can't find the container with id 9850424081dd233d17998165cb4b3ca6667dd860f1070cb013611f226b1034d6 Dec 11 14:08:33 crc kubenswrapper[4924]: I1211 14:08:33.510843 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n"] Dec 11 14:08:33 crc kubenswrapper[4924]: W1211 14:08:33.563713 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7015f850_c4bf_4212_b23d_4e14e2e8edb1.slice/crio-84833c4dd4308378f0bf0c343da0d22764e8f597d71850254cdfbb85baf0a873 WatchSource:0}: Error finding container 84833c4dd4308378f0bf0c343da0d22764e8f597d71850254cdfbb85baf0a873: 
Status 404 returned error can't find the container with id 84833c4dd4308378f0bf0c343da0d22764e8f597d71850254cdfbb85baf0a873 Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.207054 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-hg49r" event={"ID":"2b4ea8b1-e726-4f02-aa91-f97dc1122eab","Type":"ContainerStarted","Data":"9850424081dd233d17998165cb4b3ca6667dd860f1070cb013611f226b1034d6"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.208645 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerStarted","Data":"41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.210125 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" event={"ID":"226608b6-d0d0-419d-aa80-788bfe423da4","Type":"ContainerStarted","Data":"7900b53f8ecd2a543270e3a70b330f864e01b3b0fc095f91ddfb0a16b594d022"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.211148 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" event={"ID":"847bf44c-6f92-49a3-8714-34558df6f0f7","Type":"ContainerStarted","Data":"5c71cfd76a2a54f9abfc10d17852e4a88908fd6ba14791f7d3c0450c6c2463d7"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.212150 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" event={"ID":"7015f850-c4bf-4212-b23d-4e14e2e8edb1","Type":"ContainerStarted","Data":"84833c4dd4308378f0bf0c343da0d22764e8f597d71850254cdfbb85baf0a873"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.214430 4924 generic.go:334] "Generic (PLEG): container finished" podID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" 
containerID="335c8048b332b1530500b31d9f94358f08648ea8f7b9be87cbbca80bbcebc29a" exitCode=0 Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.214501 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" event={"ID":"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd","Type":"ContainerDied","Data":"335c8048b332b1530500b31d9f94358f08648ea8f7b9be87cbbca80bbcebc29a"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.217786 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" event={"ID":"29650e77-3c2e-45da-bac3-f26fe39e95d9","Type":"ContainerStarted","Data":"fe45e3ebccc0e78002861a1676d73922cee5732862e759670f0481916cf0c5e8"} Dec 11 14:08:34 crc kubenswrapper[4924]: I1211 14:08:34.273713 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nshwf" podStartSLOduration=2.853086618 podStartE2EDuration="8.273687492s" podCreationTimestamp="2025-12-11 14:08:26 +0000 UTC" firstStartedPulling="2025-12-11 14:08:28.14750686 +0000 UTC m=+921.656987837" lastFinishedPulling="2025-12-11 14:08:33.568107744 +0000 UTC m=+927.077588711" observedRunningTime="2025-12-11 14:08:34.236211675 +0000 UTC m=+927.745692652" watchObservedRunningTime="2025-12-11 14:08:34.273687492 +0000 UTC m=+927.783168469" Dec 11 14:08:35 crc kubenswrapper[4924]: I1211 14:08:35.249879 4924 generic.go:334] "Generic (PLEG): container finished" podID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerID="4cf9b29247127a5054ce18cc74222c7c93e9230c2f8e83fe7e8eec43d3783012" exitCode=0 Dec 11 14:08:35 crc kubenswrapper[4924]: I1211 14:08:35.249984 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" 
event={"ID":"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd","Type":"ContainerDied","Data":"4cf9b29247127a5054ce18cc74222c7c93e9230c2f8e83fe7e8eec43d3783012"} Dec 11 14:08:35 crc kubenswrapper[4924]: I1211 14:08:35.816600 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:35 crc kubenswrapper[4924]: I1211 14:08:35.887599 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.627599 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.656706 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util\") pod \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.656806 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle\") pod \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.656850 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wqsc\" (UniqueName: \"kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc\") pod \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\" (UID: \"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd\") " Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.659538 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle" (OuterVolumeSpecName: "bundle") pod "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" (UID: "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.668495 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util" (OuterVolumeSpecName: "util") pod "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" (UID: "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.679924 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc" (OuterVolumeSpecName: "kube-api-access-6wqsc") pod "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" (UID: "432f37b8-3eac-4e9a-bc87-fa34be6e9fbd"). InnerVolumeSpecName "kube-api-access-6wqsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.758650 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.758681 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:36 crc kubenswrapper[4924]: I1211 14:08:36.758692 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wqsc\" (UniqueName: \"kubernetes.io/projected/432f37b8-3eac-4e9a-bc87-fa34be6e9fbd-kube-api-access-6wqsc\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.286042 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" event={"ID":"432f37b8-3eac-4e9a-bc87-fa34be6e9fbd","Type":"ContainerDied","Data":"d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f"} Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.286273 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0336c3b3d2a22bf4f9d03fe2e28e2e61ce420a860611f15a417f0a80a65706f" Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.286306 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs" Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.327039 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.327123 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:37 crc kubenswrapper[4924]: I1211 14:08:37.432417 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:38 crc kubenswrapper[4924]: I1211 14:08:38.403124 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.203817 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.204284 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zkkx" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="registry-server" containerID="cri-o://179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f" gracePeriod=2 Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.603535 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.709292 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content\") pod \"762cbd54-2e6f-4284-8ae5-f85244456687\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.709403 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities\") pod \"762cbd54-2e6f-4284-8ae5-f85244456687\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.709500 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5krg\" (UniqueName: \"kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg\") pod \"762cbd54-2e6f-4284-8ae5-f85244456687\" (UID: \"762cbd54-2e6f-4284-8ae5-f85244456687\") " Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.711521 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities" (OuterVolumeSpecName: "utilities") pod "762cbd54-2e6f-4284-8ae5-f85244456687" (UID: "762cbd54-2e6f-4284-8ae5-f85244456687"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.730492 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg" (OuterVolumeSpecName: "kube-api-access-n5krg") pod "762cbd54-2e6f-4284-8ae5-f85244456687" (UID: "762cbd54-2e6f-4284-8ae5-f85244456687"). InnerVolumeSpecName "kube-api-access-n5krg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.810622 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.810658 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5krg\" (UniqueName: \"kubernetes.io/projected/762cbd54-2e6f-4284-8ae5-f85244456687-kube-api-access-n5krg\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.826285 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "762cbd54-2e6f-4284-8ae5-f85244456687" (UID: "762cbd54-2e6f-4284-8ae5-f85244456687"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:39 crc kubenswrapper[4924]: I1211 14:08:39.911794 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762cbd54-2e6f-4284-8ae5-f85244456687-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116371 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6bdd878b6d-5p5fw"] Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116586 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="extract" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116596 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="extract" Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116608 4924 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="extract-content" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116614 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="extract-content" Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116632 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="extract-utilities" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116640 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="extract-utilities" Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116653 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="pull" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116661 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="pull" Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116672 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="util" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116679 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" containerName="util" Dec 11 14:08:40 crc kubenswrapper[4924]: E1211 14:08:40.116686 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="registry-server" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116693 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="registry-server" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116802 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="432f37b8-3eac-4e9a-bc87-fa34be6e9fbd" 
containerName="extract" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.116817 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" containerName="registry-server" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.117234 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.122021 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.122110 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-hjfh8" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.122117 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.122177 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.132892 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6bdd878b6d-5p5fw"] Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.214051 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-apiservice-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.214130 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c488g\" (UniqueName: 
\"kubernetes.io/projected/7b084842-0b17-427d-8035-aefbd150e92f-kube-api-access-c488g\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.214159 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-webhook-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.313579 4924 generic.go:334] "Generic (PLEG): container finished" podID="762cbd54-2e6f-4284-8ae5-f85244456687" containerID="179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f" exitCode=0 Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.313618 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerDied","Data":"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f"} Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.313642 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zkkx" event={"ID":"762cbd54-2e6f-4284-8ae5-f85244456687","Type":"ContainerDied","Data":"f9ae5a468f46c43fdc6c2bbdfe69d918d68618cb2311add5fdeda3766b322c35"} Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.313659 4924 scope.go:117] "RemoveContainer" containerID="179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.313765 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zkkx" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.319775 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c488g\" (UniqueName: \"kubernetes.io/projected/7b084842-0b17-427d-8035-aefbd150e92f-kube-api-access-c488g\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.319814 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-webhook-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.319858 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-apiservice-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.328078 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-webhook-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.335166 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b084842-0b17-427d-8035-aefbd150e92f-apiservice-cert\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: 
\"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.355929 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c488g\" (UniqueName: \"kubernetes.io/projected/7b084842-0b17-427d-8035-aefbd150e92f-kube-api-access-c488g\") pod \"elastic-operator-6bdd878b6d-5p5fw\" (UID: \"7b084842-0b17-427d-8035-aefbd150e92f\") " pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.412317 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.418210 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zkkx"] Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.438276 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" Dec 11 14:08:40 crc kubenswrapper[4924]: I1211 14:08:40.793076 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762cbd54-2e6f-4284-8ae5-f85244456687" path="/var/lib/kubelet/pods/762cbd54-2e6f-4284-8ae5-f85244456687/volumes" Dec 11 14:08:41 crc kubenswrapper[4924]: I1211 14:08:41.798477 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:41 crc kubenswrapper[4924]: I1211 14:08:41.798895 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nshwf" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="registry-server" containerID="cri-o://41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" gracePeriod=2 Dec 11 14:08:42 crc kubenswrapper[4924]: I1211 14:08:42.332381 4924 generic.go:334] "Generic (PLEG): container finished" 
podID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerID="41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" exitCode=0 Dec 11 14:08:42 crc kubenswrapper[4924]: I1211 14:08:42.332429 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerDied","Data":"41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502"} Dec 11 14:08:45 crc kubenswrapper[4924]: I1211 14:08:45.433814 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:08:45 crc kubenswrapper[4924]: I1211 14:08:45.434169 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:08:47 crc kubenswrapper[4924]: E1211 14:08:47.327103 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502 is running failed: container process not found" containerID="41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:08:47 crc kubenswrapper[4924]: E1211 14:08:47.327821 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502 is running failed: container process not found" 
containerID="41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:08:47 crc kubenswrapper[4924]: E1211 14:08:47.331240 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502 is running failed: container process not found" containerID="41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:08:47 crc kubenswrapper[4924]: E1211 14:08:47.331307 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-nshwf" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="registry-server" Dec 11 14:08:49 crc kubenswrapper[4924]: E1211 14:08:49.835604 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Dec 11 14:08:49 crc kubenswrapper[4924]: E1211 14:08:49.836005 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator 
--alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vddr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-fdkqr_openshift-operators(29650e77-3c2e-45da-bac3-f26fe39e95d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Dec 11 14:08:49 crc kubenswrapper[4924]: E1211 14:08:49.838059 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" podUID="29650e77-3c2e-45da-bac3-f26fe39e95d9" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.375422 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" podUID="29650e77-3c2e-45da-bac3-f26fe39e95d9" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.432164 4924 scope.go:117] "RemoveContainer" containerID="75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.446627 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.446828 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true 
--web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-6567b776c7-tn75n_openshift-operators(7015f850-c4bf-4212-b23d-4e14e2e8edb1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.448051 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" podUID="7015f850-c4bf-4212-b23d-4e14e2e8edb1" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.460386 4924 scope.go:117] "RemoveContainer" containerID="65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.470061 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.470448 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6_openshift-operators(847bf44c-6f92-49a3-8714-34558df6f0f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.471649 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" podUID="847bf44c-6f92-49a3-8714-34558df6f0f7" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.517250 4924 scope.go:117] "RemoveContainer" containerID="179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 
14:08:50.518045 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f\": container with ID starting with 179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f not found: ID does not exist" containerID="179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.518082 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f"} err="failed to get container status \"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f\": rpc error: code = NotFound desc = could not find container \"179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f\": container with ID starting with 179dc1ff4588844cf8349c9062388ad8246f8c2d72e3cd6edd1eb06b4defc60f not found: ID does not exist" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.518102 4924 scope.go:117] "RemoveContainer" containerID="75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.518292 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90\": container with ID starting with 75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90 not found: ID does not exist" containerID="75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.518315 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90"} err="failed to get container status \"75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90\": rpc 
error: code = NotFound desc = could not find container \"75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90\": container with ID starting with 75c5d1b99318280256423935c7b89aed7818d9804bfec93443dbe4f38df31b90 not found: ID does not exist" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.518338 4924 scope.go:117] "RemoveContainer" containerID="65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210" Dec 11 14:08:50 crc kubenswrapper[4924]: E1211 14:08:50.518674 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210\": container with ID starting with 65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210 not found: ID does not exist" containerID="65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.518700 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210"} err="failed to get container status \"65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210\": rpc error: code = NotFound desc = could not find container \"65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210\": container with ID starting with 65d7a52fbbff47995a3837622826f46c0ca90dc5a31d77db510b058523a62210 not found: ID does not exist" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.521937 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.569663 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptgdw\" (UniqueName: \"kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw\") pod \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.569736 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content\") pod \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.569763 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities\") pod \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\" (UID: \"95ee3e84-cb99-465d-83b1-89ea651b7a1e\") " Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.583334 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities" (OuterVolumeSpecName: "utilities") pod "95ee3e84-cb99-465d-83b1-89ea651b7a1e" (UID: "95ee3e84-cb99-465d-83b1-89ea651b7a1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.593248 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw" (OuterVolumeSpecName: "kube-api-access-ptgdw") pod "95ee3e84-cb99-465d-83b1-89ea651b7a1e" (UID: "95ee3e84-cb99-465d-83b1-89ea651b7a1e"). InnerVolumeSpecName "kube-api-access-ptgdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.633837 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95ee3e84-cb99-465d-83b1-89ea651b7a1e" (UID: "95ee3e84-cb99-465d-83b1-89ea651b7a1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.672341 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptgdw\" (UniqueName: \"kubernetes.io/projected/95ee3e84-cb99-465d-83b1-89ea651b7a1e-kube-api-access-ptgdw\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.672378 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.672390 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ee3e84-cb99-465d-83b1-89ea651b7a1e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:50 crc kubenswrapper[4924]: I1211 14:08:50.710076 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6bdd878b6d-5p5fw"] Dec 11 14:08:50 crc kubenswrapper[4924]: W1211 14:08:50.715948 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b084842_0b17_427d_8035_aefbd150e92f.slice/crio-934a49508ab10a5a7489b4f4c56076962d72a7eb35a5899d5a798fa9d458273a WatchSource:0}: Error finding container 934a49508ab10a5a7489b4f4c56076962d72a7eb35a5899d5a798fa9d458273a: Status 404 returned error can't find the container with id 
934a49508ab10a5a7489b4f4c56076962d72a7eb35a5899d5a798fa9d458273a Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.381679 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" event={"ID":"7b084842-0b17-427d-8035-aefbd150e92f","Type":"ContainerStarted","Data":"934a49508ab10a5a7489b4f4c56076962d72a7eb35a5899d5a798fa9d458273a"} Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.384931 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-hg49r" event={"ID":"2b4ea8b1-e726-4f02-aa91-f97dc1122eab","Type":"ContainerStarted","Data":"76aa34b07b75fd45a4cdc2d66cef3f8de6fa0b4e4d53767dce3d207db31ae30a"} Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.385881 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.388502 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nshwf" event={"ID":"95ee3e84-cb99-465d-83b1-89ea651b7a1e","Type":"ContainerDied","Data":"506ead83b08100977bf9086a34c1486164bfa27de3624245b179c8e0cd61e8e7"} Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.388530 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nshwf" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.388554 4924 scope.go:117] "RemoveContainer" containerID="41712a5458f9254639f488a071356f3034de896f868eaa397fe537d65fd53502" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.390721 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" event={"ID":"226608b6-d0d0-419d-aa80-788bfe423da4","Type":"ContainerStarted","Data":"f2c2035c41da4018678a918ffa126d0f1b02e51568f6c054b1381b5665c7784f"} Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.391043 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:51 crc kubenswrapper[4924]: E1211 14:08:51.392786 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" podUID="847bf44c-6f92-49a3-8714-34558df6f0f7" Dec 11 14:08:51 crc kubenswrapper[4924]: E1211 14:08:51.392826 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" podUID="7015f850-c4bf-4212-b23d-4e14e2e8edb1" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.412587 4924 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-hg49r" podStartSLOduration=2.419284699 podStartE2EDuration="19.412572298s" podCreationTimestamp="2025-12-11 14:08:32 +0000 UTC" firstStartedPulling="2025-12-11 14:08:33.484031709 +0000 UTC m=+926.993512686" lastFinishedPulling="2025-12-11 14:08:50.477319308 +0000 UTC m=+943.986800285" observedRunningTime="2025-12-11 14:08:51.409761948 +0000 UTC m=+944.919242925" watchObservedRunningTime="2025-12-11 14:08:51.412572298 +0000 UTC m=+944.922053275" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.414032 4924 scope.go:117] "RemoveContainer" containerID="a44e96d6cd4b08df031a314a90fc7cd2ddd5d85cf93a5c49dbab687811f11700" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.429537 4924 scope.go:117] "RemoveContainer" containerID="15f0b447a84641a55f7943df6f96d7273c3d5994f950280a4e08ec385920ec0a" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.430303 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.496748 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.505096 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nshwf"] Dec 11 14:08:51 crc kubenswrapper[4924]: I1211 14:08:51.532457 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-b4c8b" podStartSLOduration=2.482337385 podStartE2EDuration="19.532436352s" podCreationTimestamp="2025-12-11 14:08:32 +0000 UTC" firstStartedPulling="2025-12-11 14:08:33.477949626 +0000 UTC m=+926.987430603" lastFinishedPulling="2025-12-11 14:08:50.528048593 +0000 UTC m=+944.037529570" observedRunningTime="2025-12-11 
14:08:51.530768675 +0000 UTC m=+945.040249662" watchObservedRunningTime="2025-12-11 14:08:51.532436352 +0000 UTC m=+945.041917329" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.794888 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" path="/var/lib/kubelet/pods/95ee3e84-cb99-465d-83b1-89ea651b7a1e/volumes" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.795705 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c"] Dec 11 14:08:52 crc kubenswrapper[4924]: E1211 14:08:52.796055 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="extract-utilities" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.796065 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="extract-utilities" Dec 11 14:08:52 crc kubenswrapper[4924]: E1211 14:08:52.796079 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="extract-content" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.796085 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="extract-content" Dec 11 14:08:52 crc kubenswrapper[4924]: E1211 14:08:52.796104 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="registry-server" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.796109 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="registry-server" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.796203 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ee3e84-cb99-465d-83b1-89ea651b7a1e" containerName="registry-server" Dec 11 14:08:52 
crc kubenswrapper[4924]: I1211 14:08:52.796590 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.803851 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.803862 4924 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-4q7ll" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.804444 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c"] Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.804651 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.906435 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda77cf5-ec49-476e-bd43-a020acbc0dca-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:52 crc kubenswrapper[4924]: I1211 14:08:52.906488 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrps5\" (UniqueName: \"kubernetes.io/projected/cda77cf5-ec49-476e-bd43-a020acbc0dca-kube-api-access-lrps5\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.011676 
4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda77cf5-ec49-476e-bd43-a020acbc0dca-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.011754 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrps5\" (UniqueName: \"kubernetes.io/projected/cda77cf5-ec49-476e-bd43-a020acbc0dca-kube-api-access-lrps5\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.012720 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda77cf5-ec49-476e-bd43-a020acbc0dca-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.046340 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrps5\" (UniqueName: \"kubernetes.io/projected/cda77cf5-ec49-476e-bd43-a020acbc0dca-kube-api-access-lrps5\") pod \"cert-manager-operator-controller-manager-5446d6888b-ff89c\" (UID: \"cda77cf5-ec49-476e-bd43-a020acbc0dca\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.113437 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" Dec 11 14:08:53 crc kubenswrapper[4924]: I1211 14:08:53.776268 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c"] Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.414601 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" event={"ID":"cda77cf5-ec49-476e-bd43-a020acbc0dca","Type":"ContainerStarted","Data":"3472d2b80d2f06920b1d654b68abe2d6ae6ea697efc09eb67cb5473b6a249e55"} Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.416025 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" event={"ID":"7b084842-0b17-427d-8035-aefbd150e92f","Type":"ContainerStarted","Data":"a4b3caa885cbc53b8a34b3a912025eb4510f8408fbbac0e2d6fd40611d14d3af"} Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.448821 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6bdd878b6d-5p5fw" podStartSLOduration=11.539306559 podStartE2EDuration="14.448795505s" podCreationTimestamp="2025-12-11 14:08:40 +0000 UTC" firstStartedPulling="2025-12-11 14:08:50.719701012 +0000 UTC m=+944.229181989" lastFinishedPulling="2025-12-11 14:08:53.629189968 +0000 UTC m=+947.138670935" observedRunningTime="2025-12-11 14:08:54.445401738 +0000 UTC m=+947.954882725" watchObservedRunningTime="2025-12-11 14:08:54.448795505 +0000 UTC m=+947.958276502" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.724098 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.738776 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751145 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751443 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751585 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751735 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751848 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.751953 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.752472 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.752857 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.753818 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-7xpbt" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835291 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835369 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835390 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835406 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835425 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" 
Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835460 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835481 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835510 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c00b0adf-9f8a-44cd-9ca7-381849854fdc-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835533 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835556 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: 
\"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835579 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835618 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835637 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835651 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835681 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.835994 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941153 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941203 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941244 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc 
kubenswrapper[4924]: I1211 14:08:54.941311 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941359 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941383 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941405 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941431 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: 
\"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941479 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941507 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941542 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c00b0adf-9f8a-44cd-9ca7-381849854fdc-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941567 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941734 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.941786 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.942877 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.943164 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.943209 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.943249 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.943307 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.944587 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.945015 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.945044 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.946440 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.948277 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.948279 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.956058 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c00b0adf-9f8a-44cd-9ca7-381849854fdc-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.956083 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.958916 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.959319 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:54 crc kubenswrapper[4924]: I1211 14:08:54.960617 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c00b0adf-9f8a-44cd-9ca7-381849854fdc-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c00b0adf-9f8a-44cd-9ca7-381849854fdc\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:55 crc kubenswrapper[4924]: I1211 14:08:55.123910 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:08:55 crc kubenswrapper[4924]: I1211 14:08:55.341974 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 14:08:55 crc kubenswrapper[4924]: I1211 14:08:55.423032 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c00b0adf-9f8a-44cd-9ca7-381849854fdc","Type":"ContainerStarted","Data":"42f03d750e3641bc6d92bd3394e9f19a6fbc25b3d7a2bb6a5e68d1329942f283"} Dec 11 14:08:57 crc kubenswrapper[4924]: I1211 14:08:57.438375 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" event={"ID":"cda77cf5-ec49-476e-bd43-a020acbc0dca","Type":"ContainerStarted","Data":"5e19a87b57835cd8ccde1870aeb5137fe808e0549b1b7c4e88212c2ce9e072cd"} Dec 11 14:08:57 crc kubenswrapper[4924]: I1211 14:08:57.458639 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-ff89c" podStartSLOduration=2.438546703 podStartE2EDuration="5.458622699s" podCreationTimestamp="2025-12-11 14:08:52 +0000 UTC" firstStartedPulling="2025-12-11 14:08:53.786374226 +0000 UTC m=+947.295855193" lastFinishedPulling="2025-12-11 14:08:56.806450212 +0000 UTC m=+950.315931189" observedRunningTime="2025-12-11 14:08:57.455047417 +0000 UTC m=+950.964528394" watchObservedRunningTime="2025-12-11 14:08:57.458622699 +0000 UTC m=+950.968103676" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.439564 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w9dnq"] Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.440327 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.445804 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.451577 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w9dnq"] Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.452081 4924 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5lzpn" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.452084 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.525697 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn8qc\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-kube-api-access-rn8qc\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: \"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.526012 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: \"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.626823 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn8qc\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-kube-api-access-rn8qc\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: 
\"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.626879 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: \"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.648293 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: \"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.651262 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn8qc\" (UniqueName: \"kubernetes.io/projected/4b21fe71-2162-433e-8080-333688ba4bea-kube-api-access-rn8qc\") pod \"cert-manager-webhook-f4fb5df64-w9dnq\" (UID: \"4b21fe71-2162-433e-8080-333688ba4bea\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.761233 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:00 crc kubenswrapper[4924]: I1211 14:09:00.972306 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w9dnq"] Dec 11 14:09:01 crc kubenswrapper[4924]: I1211 14:09:01.459667 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" event={"ID":"4b21fe71-2162-433e-8080-333688ba4bea","Type":"ContainerStarted","Data":"c10fff01d45e7f7d572ad01b32c4d8f2e78a7f30fcdb3dcc60768c746a6171a3"} Dec 11 14:09:02 crc kubenswrapper[4924]: I1211 14:09:02.683049 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-hg49r" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.541752 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m"] Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.542474 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.545184 4924 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-m5zms" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.562192 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m"] Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.567912 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznbr\" (UniqueName: \"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-kube-api-access-lznbr\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.568090 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.669503 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.669602 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznbr\" (UniqueName: 
\"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-kube-api-access-lznbr\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.689600 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznbr\" (UniqueName: \"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-kube-api-access-lznbr\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.694915 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6500304b-4ffe-41bb-9e9a-ecf681b14e63-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-8kr7m\" (UID: \"6500304b-4ffe-41bb-9e9a-ecf681b14e63\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:03 crc kubenswrapper[4924]: I1211 14:09:03.857584 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.487591 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m"] Dec 11 14:09:10 crc kubenswrapper[4924]: W1211 14:09:10.501631 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6500304b_4ffe_41bb_9e9a_ecf681b14e63.slice/crio-c52273062321fb89685d904a93d789d2f926a4828bb909bb91c59b014474480f WatchSource:0}: Error finding container c52273062321fb89685d904a93d789d2f926a4828bb909bb91c59b014474480f: Status 404 returned error can't find the container with id c52273062321fb89685d904a93d789d2f926a4828bb909bb91c59b014474480f Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.523142 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" event={"ID":"6500304b-4ffe-41bb-9e9a-ecf681b14e63","Type":"ContainerStarted","Data":"c52273062321fb89685d904a93d789d2f926a4828bb909bb91c59b014474480f"} Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.686474 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7q8fg"] Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.688588 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.690478 4924 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lktdb" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.699143 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7q8fg"] Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.778847 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrc6\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-kube-api-access-ltrc6\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: \"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.779035 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-bound-sa-token\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: \"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.880385 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltrc6\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-kube-api-access-ltrc6\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: \"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.880459 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-bound-sa-token\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: 
\"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.898996 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-bound-sa-token\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: \"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:10 crc kubenswrapper[4924]: I1211 14:09:10.900770 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltrc6\" (UniqueName: \"kubernetes.io/projected/400979a3-71d8-499d-8c9d-087f3f50bd16-kube-api-access-ltrc6\") pod \"cert-manager-86cb77c54b-7q8fg\" (UID: \"400979a3-71d8-499d-8c9d-087f3f50bd16\") " pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.008176 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7q8fg" Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.243522 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7q8fg"] Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.544452 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" event={"ID":"29650e77-3c2e-45da-bac3-f26fe39e95d9","Type":"ContainerStarted","Data":"f45d0c9ec206b34f76e473be4e58e379d560a6c9ad8da69cf9f6a0bfe795e9de"} Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.550772 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" event={"ID":"847bf44c-6f92-49a3-8714-34558df6f0f7","Type":"ContainerStarted","Data":"23e576065f97a789690a5708d5beb0a1cbce2c19128c78f0f3bbac5608e5ed1d"} Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.558397 4924 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7q8fg" event={"ID":"400979a3-71d8-499d-8c9d-087f3f50bd16","Type":"ContainerStarted","Data":"0b26e11e8551c2937e5053ae6a7784fcd8456f055425e1593b6cac5d81d5214b"} Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.565633 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" event={"ID":"7015f850-c4bf-4212-b23d-4e14e2e8edb1","Type":"ContainerStarted","Data":"390d99ae3862e93cc7fdd4efa1de0ac271f95579e5faf1aa9a4cbd940d678c37"} Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.569169 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-fdkqr" podStartSLOduration=3.688724285 podStartE2EDuration="40.569144973s" podCreationTimestamp="2025-12-11 14:08:31 +0000 UTC" firstStartedPulling="2025-12-11 14:08:33.44438606 +0000 UTC m=+926.953867037" lastFinishedPulling="2025-12-11 14:09:10.324806748 +0000 UTC m=+963.834287725" observedRunningTime="2025-12-11 14:09:11.561785393 +0000 UTC m=+965.071266370" watchObservedRunningTime="2025-12-11 14:09:11.569144973 +0000 UTC m=+965.078625960" Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.571792 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c00b0adf-9f8a-44cd-9ca7-381849854fdc","Type":"ContainerStarted","Data":"adb951f984b12e83076b195250cbdb7e8fbc84235ad601adb1869d8d4d114e24"} Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.595270 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6" podStartSLOduration=3.693401409 podStartE2EDuration="40.595245407s" podCreationTimestamp="2025-12-11 14:08:31 +0000 UTC" firstStartedPulling="2025-12-11 14:08:33.416377172 +0000 UTC m=+926.925858149" 
lastFinishedPulling="2025-12-11 14:09:10.31822117 +0000 UTC m=+963.827702147" observedRunningTime="2025-12-11 14:09:11.589962556 +0000 UTC m=+965.099443523" watchObservedRunningTime="2025-12-11 14:09:11.595245407 +0000 UTC m=+965.104726384" Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.669444 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6567b776c7-tn75n" podStartSLOduration=3.969235235 podStartE2EDuration="40.66942608s" podCreationTimestamp="2025-12-11 14:08:31 +0000 UTC" firstStartedPulling="2025-12-11 14:08:33.568121934 +0000 UTC m=+927.077602911" lastFinishedPulling="2025-12-11 14:09:10.268312779 +0000 UTC m=+963.777793756" observedRunningTime="2025-12-11 14:09:11.619426005 +0000 UTC m=+965.128906982" watchObservedRunningTime="2025-12-11 14:09:11.66942608 +0000 UTC m=+965.178907057" Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.956551 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 14:09:11 crc kubenswrapper[4924]: I1211 14:09:11.989640 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 11 14:09:13 crc kubenswrapper[4924]: I1211 14:09:13.590224 4924 generic.go:334] "Generic (PLEG): container finished" podID="c00b0adf-9f8a-44cd-9ca7-381849854fdc" containerID="adb951f984b12e83076b195250cbdb7e8fbc84235ad601adb1869d8d4d114e24" exitCode=0 Dec 11 14:09:13 crc kubenswrapper[4924]: I1211 14:09:13.590344 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c00b0adf-9f8a-44cd-9ca7-381849854fdc","Type":"ContainerDied","Data":"adb951f984b12e83076b195250cbdb7e8fbc84235ad601adb1869d8d4d114e24"} Dec 11 14:09:15 crc kubenswrapper[4924]: I1211 14:09:15.434351 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:09:15 crc kubenswrapper[4924]: I1211 14:09:15.434603 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:09:15 crc kubenswrapper[4924]: I1211 14:09:15.434649 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 14:09:15 crc kubenswrapper[4924]: I1211 14:09:15.435186 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:09:15 crc kubenswrapper[4924]: I1211 14:09:15.435232 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4" gracePeriod=600 Dec 11 14:09:17 crc kubenswrapper[4924]: I1211 14:09:17.640715 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4" exitCode=0 Dec 11 14:09:17 crc kubenswrapper[4924]: I1211 14:09:17.640755 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4"} Dec 11 14:09:17 crc kubenswrapper[4924]: I1211 14:09:17.640863 4924 scope.go:117] "RemoveContainer" containerID="0cfc937b59df394dc479967ee3b851d03bd936dae5a6426167548bc775a14bcd" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.963203 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.965557 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-cainjector,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/cainjector/cainjector],Args:[--leader-election-namespace=kube-system 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lznbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-cainjector-855d9ccff4-8kr7m_cert-manager(6500304b-4ffe-41bb-9e9a-ecf681b14e63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.968388 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled\"" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" podUID="6500304b-4ffe-41bb-9e9a-ecf681b14e63" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.996078 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.996311 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rn8qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 
healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-w9dnq_cert-manager(4b21fe71-2162-433e-8080-333688ba4bea): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 14:09:26 crc kubenswrapper[4924]: E1211 14:09:26.997992 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" podUID="4b21fe71-2162-433e-8080-333688ba4bea" Dec 11 14:09:27 crc kubenswrapper[4924]: I1211 14:09:27.692139 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7q8fg" event={"ID":"400979a3-71d8-499d-8c9d-087f3f50bd16","Type":"ContainerStarted","Data":"073be95784365c708f4aa435456a0c670d6b865d536ec0278fe27627e96e39b8"} Dec 11 14:09:27 crc kubenswrapper[4924]: I1211 14:09:27.694481 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc"} Dec 11 14:09:27 crc kubenswrapper[4924]: I1211 14:09:27.696794 4924 generic.go:334] "Generic (PLEG): container finished" podID="c00b0adf-9f8a-44cd-9ca7-381849854fdc" containerID="ac40cfb9f0fc1c4f5492c9497573f467a60b8fb0a4acb24a3c03a06eb8804404" exitCode=0 Dec 11 14:09:27 crc kubenswrapper[4924]: I1211 14:09:27.697823 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c00b0adf-9f8a-44cd-9ca7-381849854fdc","Type":"ContainerDied","Data":"ac40cfb9f0fc1c4f5492c9497573f467a60b8fb0a4acb24a3c03a06eb8804404"} Dec 11 14:09:27 crc kubenswrapper[4924]: I1211 14:09:27.723568 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-7q8fg" podStartSLOduration=1.930695351 podStartE2EDuration="17.723550326s" podCreationTimestamp="2025-12-11 14:09:10 +0000 UTC" firstStartedPulling="2025-12-11 14:09:11.254776058 +0000 UTC m=+964.764257045" lastFinishedPulling="2025-12-11 14:09:27.047631043 +0000 UTC m=+980.557112020" observedRunningTime="2025-12-11 14:09:27.722598669 +0000 UTC m=+981.232079646" watchObservedRunningTime="2025-12-11 14:09:27.723550326 +0000 UTC m=+981.233031303" Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.703075 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" event={"ID":"4b21fe71-2162-433e-8080-333688ba4bea","Type":"ContainerStarted","Data":"6d1bf2a24e30550835246cf7ac51011b18abb2862103b481439d1fdd4de5e42a"} Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.703794 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.706105 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" event={"ID":"6500304b-4ffe-41bb-9e9a-ecf681b14e63","Type":"ContainerStarted","Data":"b3c323293dd8f10369b1eb4a3dbbe8411f2726a3e746391af0e1e9ee83db1160"} Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.708689 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c00b0adf-9f8a-44cd-9ca7-381849854fdc","Type":"ContainerStarted","Data":"03063dd974996c7c6d55a22665249d20d3c3da3fe08b86d9cbc731dd7281ddb2"} Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.722435 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" podStartSLOduration=-9223372008.132362 podStartE2EDuration="28.722413289s" podCreationTimestamp="2025-12-11 14:09:00 +0000 UTC" firstStartedPulling="2025-12-11 14:09:00.989044402 +0000 UTC m=+954.498525379" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:28.717704995 +0000 UTC m=+982.227185972" watchObservedRunningTime="2025-12-11 14:09:28.722413289 +0000 UTC m=+982.231894266" Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.749491 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=19.686391482 podStartE2EDuration="34.74947229s" podCreationTimestamp="2025-12-11 14:08:54 +0000 UTC" firstStartedPulling="2025-12-11 14:08:55.355316276 +0000 UTC m=+948.864797243" lastFinishedPulling="2025-12-11 14:09:10.418397074 +0000 UTC m=+963.927878051" observedRunningTime="2025-12-11 14:09:28.748231184 +0000 UTC m=+982.257712161" watchObservedRunningTime="2025-12-11 14:09:28.74947229 +0000 UTC m=+982.258953267" Dec 11 14:09:28 crc kubenswrapper[4924]: I1211 14:09:28.771477 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-8kr7m" 
podStartSLOduration=-9223372011.08332 podStartE2EDuration="25.771455316s" podCreationTimestamp="2025-12-11 14:09:03 +0000 UTC" firstStartedPulling="2025-12-11 14:09:10.505381702 +0000 UTC m=+964.014862679" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:28.766315399 +0000 UTC m=+982.275796376" watchObservedRunningTime="2025-12-11 14:09:28.771455316 +0000 UTC m=+982.280936293" Dec 11 14:09:29 crc kubenswrapper[4924]: I1211 14:09:29.719865 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:09:35 crc kubenswrapper[4924]: I1211 14:09:35.764148 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-w9dnq" Dec 11 14:09:40 crc kubenswrapper[4924]: I1211 14:09:40.230232 4924 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c00b0adf-9f8a-44cd-9ca7-381849854fdc" containerName="elasticsearch" probeResult="failure" output=< Dec 11 14:09:40 crc kubenswrapper[4924]: {"timestamp": "2025-12-11T14:09:40+00:00", "message": "readiness probe failed", "curl_rc": "7"} Dec 11 14:09:40 crc kubenswrapper[4924]: > Dec 11 14:09:45 crc kubenswrapper[4924]: I1211 14:09:45.436476 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.930864 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.936138 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.942710 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ctnzv" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.942724 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.943171 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.943654 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.948748 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg" Dec 11 14:09:48 crc kubenswrapper[4924]: I1211 14:09:48.967578 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.018943 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019028 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019206 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5cr\" (UniqueName: \"kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019347 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019457 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019522 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ctnzv-push\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 
14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019546 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019568 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019584 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019618 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ctnzv-pull\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019638 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019656 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.019723 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.120936 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121000 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ctnzv-push\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121019 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121050 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121068 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121084 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ctnzv-pull\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121102 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121121 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121157 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121181 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121195 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets\") pod 
\"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121462 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121484 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121614 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121220 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121728 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fg5cr\" (UniqueName: \"kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121772 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.121947 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.122565 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.122727 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 
11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.122789 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.123381 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.128392 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.128793 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ctnzv-push\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.143162 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ctnzv-pull\" (UniqueName: 
\"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.143237 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5cr\" (UniqueName: \"kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr\") pod \"service-telemetry-framework-index-1-build\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.271177 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.714684 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Dec 11 14:09:49 crc kubenswrapper[4924]: I1211 14:09:49.833919 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerStarted","Data":"ad1271251b40aa3865d406104340645c6ff878d340cc5b1f9d4ab87f5e20b358"} Dec 11 14:09:55 crc kubenswrapper[4924]: I1211 14:09:55.874700 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerStarted","Data":"244ad0037c69134d87056cb370d73d96677bbbf395c067f8993b5520906bd7e7"} Dec 11 14:09:56 crc kubenswrapper[4924]: I1211 14:09:56.883878 4924 generic.go:334] "Generic (PLEG): container finished" podID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerID="244ad0037c69134d87056cb370d73d96677bbbf395c067f8993b5520906bd7e7" 
exitCode=0 Dec 11 14:09:56 crc kubenswrapper[4924]: I1211 14:09:56.883921 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerDied","Data":"244ad0037c69134d87056cb370d73d96677bbbf395c067f8993b5520906bd7e7"} Dec 11 14:09:58 crc kubenswrapper[4924]: E1211 14:09:58.857904 4924 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba0a1cf_b779_4907_910e_5b179023d3f5.slice/crio-conmon-8ec2c78b774381faf864249fcccb8a460831bb328908f00638f3dadf048d03da.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba0a1cf_b779_4907_910e_5b179023d3f5.slice/crio-8ec2c78b774381faf864249fcccb8a460831bb328908f00638f3dadf048d03da.scope\": RecentStats: unable to find data in memory cache]" Dec 11 14:09:58 crc kubenswrapper[4924]: I1211 14:09:58.898991 4924 generic.go:334] "Generic (PLEG): container finished" podID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerID="8ec2c78b774381faf864249fcccb8a460831bb328908f00638f3dadf048d03da" exitCode=0 Dec 11 14:09:58 crc kubenswrapper[4924]: I1211 14:09:58.899053 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerDied","Data":"8ec2c78b774381faf864249fcccb8a460831bb328908f00638f3dadf048d03da"} Dec 11 14:09:58 crc kubenswrapper[4924]: I1211 14:09:58.947933 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_8ba0a1cf-b779-4907-910e-5b179023d3f5/manage-dockerfile/0.log" Dec 11 14:09:59 crc kubenswrapper[4924]: I1211 14:09:59.909612 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerStarted","Data":"ac4483191140b36e6e21adad1d423bf57f84c420f21ff25934c4b1eb2bc00f2c"} Dec 11 14:09:59 crc kubenswrapper[4924]: I1211 14:09:59.944258 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=6.394614844 podStartE2EDuration="11.944238804s" podCreationTimestamp="2025-12-11 14:09:48 +0000 UTC" firstStartedPulling="2025-12-11 14:09:49.725305651 +0000 UTC m=+1003.234786638" lastFinishedPulling="2025-12-11 14:09:55.274929621 +0000 UTC m=+1008.784410598" observedRunningTime="2025-12-11 14:09:59.942239307 +0000 UTC m=+1013.451720294" watchObservedRunningTime="2025-12-11 14:09:59.944238804 +0000 UTC m=+1013.453719781" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.821766 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.824733 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.835644 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.878428 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdfn\" (UniqueName: \"kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.878596 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.878655 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.979576 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdfn\" (UniqueName: \"kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.979944 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.979974 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.980379 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:34 crc kubenswrapper[4924]: I1211 14:10:34.980431 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:35 crc kubenswrapper[4924]: I1211 14:10:35.010699 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdfn\" (UniqueName: \"kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn\") pod \"community-operators-d5mnp\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:35 crc kubenswrapper[4924]: I1211 14:10:35.146359 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:35 crc kubenswrapper[4924]: I1211 14:10:35.477495 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:10:36 crc kubenswrapper[4924]: I1211 14:10:36.150932 4924 generic.go:334] "Generic (PLEG): container finished" podID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerID="c6379ab249b6fdbda498b95195a06b9ec9091de9b51d7558ace6cc30fdf45083" exitCode=0 Dec 11 14:10:36 crc kubenswrapper[4924]: I1211 14:10:36.150976 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerDied","Data":"c6379ab249b6fdbda498b95195a06b9ec9091de9b51d7558ace6cc30fdf45083"} Dec 11 14:10:36 crc kubenswrapper[4924]: I1211 14:10:36.151225 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerStarted","Data":"f1b331f98152db2fd90ee312d543e94587d14a21a44b9e0b58fb89dbff4b4abf"} Dec 11 14:10:37 crc kubenswrapper[4924]: I1211 14:10:37.163945 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerStarted","Data":"14699825d71bc2194e636cde0944735ba3d1250b6c5aaa57f8959ce119fe7f35"} Dec 11 14:10:38 crc kubenswrapper[4924]: I1211 14:10:38.172816 4924 generic.go:334] "Generic (PLEG): container finished" podID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerID="14699825d71bc2194e636cde0944735ba3d1250b6c5aaa57f8959ce119fe7f35" exitCode=0 Dec 11 14:10:38 crc kubenswrapper[4924]: I1211 14:10:38.172996 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" 
event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerDied","Data":"14699825d71bc2194e636cde0944735ba3d1250b6c5aaa57f8959ce119fe7f35"} Dec 11 14:10:39 crc kubenswrapper[4924]: I1211 14:10:39.184656 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerStarted","Data":"bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5"} Dec 11 14:10:39 crc kubenswrapper[4924]: I1211 14:10:39.211141 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d5mnp" podStartSLOduration=2.5520186860000003 podStartE2EDuration="5.211120303s" podCreationTimestamp="2025-12-11 14:10:34 +0000 UTC" firstStartedPulling="2025-12-11 14:10:36.152835049 +0000 UTC m=+1049.662316026" lastFinishedPulling="2025-12-11 14:10:38.811936666 +0000 UTC m=+1052.321417643" observedRunningTime="2025-12-11 14:10:39.205859163 +0000 UTC m=+1052.715340160" watchObservedRunningTime="2025-12-11 14:10:39.211120303 +0000 UTC m=+1052.720601280" Dec 11 14:10:45 crc kubenswrapper[4924]: I1211 14:10:45.147376 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:45 crc kubenswrapper[4924]: I1211 14:10:45.147987 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:45 crc kubenswrapper[4924]: I1211 14:10:45.188226 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:45 crc kubenswrapper[4924]: I1211 14:10:45.271469 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:45 crc kubenswrapper[4924]: I1211 14:10:45.429350 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:10:47 crc kubenswrapper[4924]: I1211 14:10:47.255365 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d5mnp" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="registry-server" containerID="cri-o://bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" gracePeriod=2 Dec 11 14:10:55 crc kubenswrapper[4924]: E1211 14:10:55.147210 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5 is running failed: container process not found" containerID="bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:10:55 crc kubenswrapper[4924]: E1211 14:10:55.149107 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5 is running failed: container process not found" containerID="bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:10:55 crc kubenswrapper[4924]: E1211 14:10:55.149443 4924 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5 is running failed: container process not found" containerID="bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" cmd=["grpc_health_probe","-addr=:50051"] Dec 11 14:10:55 crc kubenswrapper[4924]: E1211 14:10:55.149483 4924 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-d5mnp" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="registry-server" Dec 11 14:10:56 crc kubenswrapper[4924]: I1211 14:10:56.015737 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-d5mnp_50c9d6a5-ddcf-4dd4-8330-671891c95e21/registry-server/0.log" Dec 11 14:10:56 crc kubenswrapper[4924]: I1211 14:10:56.016510 4924 generic.go:334] "Generic (PLEG): container finished" podID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerID="bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" exitCode=-1 Dec 11 14:10:56 crc kubenswrapper[4924]: I1211 14:10:56.016540 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerDied","Data":"bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5"} Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.665235 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.786345 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities\") pod \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.786404 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdfn\" (UniqueName: \"kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn\") pod \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.786431 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content\") pod \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\" (UID: \"50c9d6a5-ddcf-4dd4-8330-671891c95e21\") " Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.787269 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities" (OuterVolumeSpecName: "utilities") pod "50c9d6a5-ddcf-4dd4-8330-671891c95e21" (UID: "50c9d6a5-ddcf-4dd4-8330-671891c95e21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.792712 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn" (OuterVolumeSpecName: "kube-api-access-kqdfn") pod "50c9d6a5-ddcf-4dd4-8330-671891c95e21" (UID: "50c9d6a5-ddcf-4dd4-8330-671891c95e21"). InnerVolumeSpecName "kube-api-access-kqdfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.888025 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:58 crc kubenswrapper[4924]: I1211 14:10:58.888195 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqdfn\" (UniqueName: \"kubernetes.io/projected/50c9d6a5-ddcf-4dd4-8330-671891c95e21-kube-api-access-kqdfn\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.037925 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d5mnp" event={"ID":"50c9d6a5-ddcf-4dd4-8330-671891c95e21","Type":"ContainerDied","Data":"f1b331f98152db2fd90ee312d543e94587d14a21a44b9e0b58fb89dbff4b4abf"} Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.037973 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d5mnp" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.037996 4924 scope.go:117] "RemoveContainer" containerID="bd22cde612bc2e098fbfaca445eb5b830ffbe638210a6cf0b926fa7ee4cfc2b5" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.058219 4924 scope.go:117] "RemoveContainer" containerID="14699825d71bc2194e636cde0944735ba3d1250b6c5aaa57f8959ce119fe7f35" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.074387 4924 scope.go:117] "RemoveContainer" containerID="c6379ab249b6fdbda498b95195a06b9ec9091de9b51d7558ace6cc30fdf45083" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.092864 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50c9d6a5-ddcf-4dd4-8330-671891c95e21" (UID: "50c9d6a5-ddcf-4dd4-8330-671891c95e21"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.191534 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50c9d6a5-ddcf-4dd4-8330-671891c95e21-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.364095 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:10:59 crc kubenswrapper[4924]: I1211 14:10:59.368427 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d5mnp"] Dec 11 14:11:00 crc kubenswrapper[4924]: I1211 14:11:00.792265 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" path="/var/lib/kubelet/pods/50c9d6a5-ddcf-4dd4-8330-671891c95e21/volumes" Dec 11 14:11:14 crc kubenswrapper[4924]: I1211 14:11:14.122249 4924 generic.go:334] "Generic (PLEG): container finished" podID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerID="ac4483191140b36e6e21adad1d423bf57f84c420f21ff25934c4b1eb2bc00f2c" exitCode=0 Dec 11 14:11:14 crc kubenswrapper[4924]: I1211 14:11:14.122531 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerDied","Data":"ac4483191140b36e6e21adad1d423bf57f84c420f21ff25934c4b1eb2bc00f2c"} Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.348847 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524476 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524542 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524572 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524591 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524621 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ctnzv-pull\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524640 4924 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fg5cr\" (UniqueName: \"kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524681 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524726 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524743 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524768 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524805 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ctnzv-push\" (UniqueName: 
\"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524820 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.524842 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"8ba0a1cf-b779-4907-910e-5b179023d3f5\" (UID: \"8ba0a1cf-b779-4907-910e-5b179023d3f5\") " Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.525278 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.525370 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.525401 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.525767 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.525956 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.526516 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.526902 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.530278 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.530486 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr" (OuterVolumeSpecName: "kube-api-access-fg5cr") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "kube-api-access-fg5cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.540869 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull" (OuterVolumeSpecName: "builder-dockercfg-ctnzv-pull") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "builder-dockercfg-ctnzv-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.540905 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push" (OuterVolumeSpecName: "builder-dockercfg-ctnzv-push") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "builder-dockercfg-ctnzv-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626580 4924 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626619 4924 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildcachedir\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626631 4924 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8ba0a1cf-b779-4907-910e-5b179023d3f5-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626640 4924 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ctnzv-push\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-push\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626651 4924 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626663 4924 
reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626675 4924 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-system-configs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626685 4924 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626694 4924 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-buildworkdir\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626703 4924 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ctnzv-pull\" (UniqueName: \"kubernetes.io/secret/8ba0a1cf-b779-4907-910e-5b179023d3f5-builder-dockercfg-ctnzv-pull\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.626712 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg5cr\" (UniqueName: \"kubernetes.io/projected/8ba0a1cf-b779-4907-910e-5b179023d3f5-kube-api-access-fg5cr\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.712915 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: 
"8ba0a1cf-b779-4907-910e-5b179023d3f5"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:15 crc kubenswrapper[4924]: I1211 14:11:15.728126 4924 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-build-blob-cache\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.138224 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"8ba0a1cf-b779-4907-910e-5b179023d3f5","Type":"ContainerDied","Data":"ad1271251b40aa3865d406104340645c6ff878d340cc5b1f9d4ab87f5e20b358"} Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.138271 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad1271251b40aa3865d406104340645c6ff878d340cc5b1f9d4ab87f5e20b358" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.138291 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.492074 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493279 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="docker-build" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493299 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="docker-build" Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493314 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="extract-utilities" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493342 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="extract-utilities" Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493354 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="registry-server" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493361 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="registry-server" Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493373 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="git-clone" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493381 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="git-clone" Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493396 4924 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="manage-dockerfile" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493405 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="manage-dockerfile" Dec 11 14:11:16 crc kubenswrapper[4924]: E1211 14:11:16.493423 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="extract-content" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493431 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="extract-content" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493554 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c9d6a5-ddcf-4dd4-8330-671891c95e21" containerName="registry-server" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.493569 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba0a1cf-b779-4907-910e-5b179023d3f5" containerName="docker-build" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.494082 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.500380 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-sndrr" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.504269 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.552674 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "8ba0a1cf-b779-4907-910e-5b179023d3f5" (UID: "8ba0a1cf-b779-4907-910e-5b179023d3f5"). 
InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.640652 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt64j\" (UniqueName: \"kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j\") pod \"infrawatch-operators-wp7xt\" (UID: \"8c30ba41-d1df-4a73-9627-5e06df02a8ba\") " pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.640954 4924 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/8ba0a1cf-b779-4907-910e-5b179023d3f5-container-storage-root\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.742644 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt64j\" (UniqueName: \"kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j\") pod \"infrawatch-operators-wp7xt\" (UID: \"8c30ba41-d1df-4a73-9627-5e06df02a8ba\") " pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.767691 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt64j\" (UniqueName: \"kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j\") pod \"infrawatch-operators-wp7xt\" (UID: \"8c30ba41-d1df-4a73-9627-5e06df02a8ba\") " pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:16 crc kubenswrapper[4924]: I1211 14:11:16.821267 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:17 crc kubenswrapper[4924]: I1211 14:11:17.231363 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:18 crc kubenswrapper[4924]: I1211 14:11:18.157461 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wp7xt" event={"ID":"8c30ba41-d1df-4a73-9627-5e06df02a8ba","Type":"ContainerStarted","Data":"90d5b2c163854097390d75d2cc7d8a428299a0f0bb3d367e6efef70c740af4ef"} Dec 11 14:11:20 crc kubenswrapper[4924]: I1211 14:11:20.876977 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.689216 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-cs65h"] Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.690188 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.694063 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cs65h"] Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.810257 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw4nj\" (UniqueName: \"kubernetes.io/projected/ac4c0937-4399-480b-8e99-f90e9d676d7e-kube-api-access-hw4nj\") pod \"infrawatch-operators-cs65h\" (UID: \"ac4c0937-4399-480b-8e99-f90e9d676d7e\") " pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.911386 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw4nj\" (UniqueName: \"kubernetes.io/projected/ac4c0937-4399-480b-8e99-f90e9d676d7e-kube-api-access-hw4nj\") pod \"infrawatch-operators-cs65h\" (UID: \"ac4c0937-4399-480b-8e99-f90e9d676d7e\") " pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:21 crc kubenswrapper[4924]: I1211 14:11:21.930293 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw4nj\" (UniqueName: \"kubernetes.io/projected/ac4c0937-4399-480b-8e99-f90e9d676d7e-kube-api-access-hw4nj\") pod \"infrawatch-operators-cs65h\" (UID: \"ac4c0937-4399-480b-8e99-f90e9d676d7e\") " pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:22 crc kubenswrapper[4924]: I1211 14:11:22.052242 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:22 crc kubenswrapper[4924]: I1211 14:11:22.809560 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-cs65h"] Dec 11 14:11:23 crc kubenswrapper[4924]: I1211 14:11:23.188644 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-cs65h" event={"ID":"ac4c0937-4399-480b-8e99-f90e9d676d7e","Type":"ContainerStarted","Data":"fe0104714e21a6fcaaa7bf8d2366dc548d927865f1424d7d7b7b9ea84efe8ee2"} Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.321748 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-cs65h" event={"ID":"ac4c0937-4399-480b-8e99-f90e9d676d7e","Type":"ContainerStarted","Data":"31e4149e894524c74a5345d9549e24947a7447346f4d98358b7dc4a4ee06c1ec"} Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.324042 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wp7xt" event={"ID":"8c30ba41-d1df-4a73-9627-5e06df02a8ba","Type":"ContainerStarted","Data":"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc"} Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.324178 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-wp7xt" podUID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" containerName="registry-server" containerID="cri-o://64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc" gracePeriod=2 Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.344398 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-cs65h" podStartSLOduration=2.932068815 podStartE2EDuration="20.344380939s" podCreationTimestamp="2025-12-11 14:11:21 +0000 UTC" firstStartedPulling="2025-12-11 14:11:22.814510803 +0000 UTC m=+1096.323991780" lastFinishedPulling="2025-12-11 
14:11:40.226822927 +0000 UTC m=+1113.736303904" observedRunningTime="2025-12-11 14:11:41.341156927 +0000 UTC m=+1114.850637924" watchObservedRunningTime="2025-12-11 14:11:41.344380939 +0000 UTC m=+1114.853861906" Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.363954 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-wp7xt" podStartSLOduration=2.296676152 podStartE2EDuration="25.363930546s" podCreationTimestamp="2025-12-11 14:11:16 +0000 UTC" firstStartedPulling="2025-12-11 14:11:17.23620958 +0000 UTC m=+1090.745690557" lastFinishedPulling="2025-12-11 14:11:40.303463974 +0000 UTC m=+1113.812944951" observedRunningTime="2025-12-11 14:11:41.355130985 +0000 UTC m=+1114.864611962" watchObservedRunningTime="2025-12-11 14:11:41.363930546 +0000 UTC m=+1114.873411543" Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.726166 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.911438 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt64j\" (UniqueName: \"kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j\") pod \"8c30ba41-d1df-4a73-9627-5e06df02a8ba\" (UID: \"8c30ba41-d1df-4a73-9627-5e06df02a8ba\") " Dec 11 14:11:41 crc kubenswrapper[4924]: I1211 14:11:41.918590 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j" (OuterVolumeSpecName: "kube-api-access-zt64j") pod "8c30ba41-d1df-4a73-9627-5e06df02a8ba" (UID: "8c30ba41-d1df-4a73-9627-5e06df02a8ba"). InnerVolumeSpecName "kube-api-access-zt64j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.013056 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt64j\" (UniqueName: \"kubernetes.io/projected/8c30ba41-d1df-4a73-9627-5e06df02a8ba-kube-api-access-zt64j\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.052928 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.053346 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.082116 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.331000 4924 generic.go:334] "Generic (PLEG): container finished" podID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" containerID="64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc" exitCode=0 Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.331133 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-wp7xt" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.331070 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wp7xt" event={"ID":"8c30ba41-d1df-4a73-9627-5e06df02a8ba","Type":"ContainerDied","Data":"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc"} Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.331290 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-wp7xt" event={"ID":"8c30ba41-d1df-4a73-9627-5e06df02a8ba","Type":"ContainerDied","Data":"90d5b2c163854097390d75d2cc7d8a428299a0f0bb3d367e6efef70c740af4ef"} Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.331338 4924 scope.go:117] "RemoveContainer" containerID="64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.344593 4924 scope.go:117] "RemoveContainer" containerID="64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc" Dec 11 14:11:42 crc kubenswrapper[4924]: E1211 14:11:42.344943 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc\": container with ID starting with 64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc not found: ID does not exist" containerID="64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.344975 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc"} err="failed to get container status \"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc\": rpc error: code = NotFound desc = could not find container 
\"64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc\": container with ID starting with 64e84ce0cc9c96bf45b3307445183a249a6950be778350652215b750b0ad43dc not found: ID does not exist" Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.367793 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.372238 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-wp7xt"] Dec 11 14:11:42 crc kubenswrapper[4924]: I1211 14:11:42.790880 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" path="/var/lib/kubelet/pods/8c30ba41-d1df-4a73-9627-5e06df02a8ba/volumes" Dec 11 14:11:45 crc kubenswrapper[4924]: I1211 14:11:45.433728 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:11:45 crc kubenswrapper[4924]: I1211 14:11:45.434453 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:11:52 crc kubenswrapper[4924]: I1211 14:11:52.079116 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-cs65h" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.176556 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q"] Dec 11 14:11:53 crc kubenswrapper[4924]: E1211 14:11:53.177144 4924 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" containerName="registry-server" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.177161 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" containerName="registry-server" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.177301 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c30ba41-d1df-4a73-9627-5e06df02a8ba" containerName="registry-server" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.178129 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.190317 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q"] Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.251727 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.251990 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6chq\" (UniqueName: \"kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.252134 4924 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.353000 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.353095 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6chq\" (UniqueName: \"kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.353145 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.353675 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.353747 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.376183 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6chq\" (UniqueName: \"kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq\") pod \"cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") " pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.497867 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.686094 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q"] Dec 11 14:11:53 crc kubenswrapper[4924]: W1211 14:11:53.695798 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84ad49c2_e643_4ed6_9e9a_379854109a8e.slice/crio-ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9 WatchSource:0}: Error finding container ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9: Status 404 returned error can't find the container with id ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9 Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.955905 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"] Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.957065 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:53 crc kubenswrapper[4924]: I1211 14:11:53.969523 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"] Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.061737 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.061834 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkwqm\" (UniqueName: \"kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.061857 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.163381 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util\") pod 
\"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.163463 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkwqm\" (UniqueName: \"kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.163485 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.163926 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.163961 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " 
pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.184731 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkwqm\" (UniqueName: \"kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm\") pod \"27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") " pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.276559 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.400172 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerStarted","Data":"ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9"}
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.492318 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"]
Dec 11 14:11:54 crc kubenswrapper[4924]: W1211 14:11:54.502500 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0291549d_f78a_4986_b217_0aeca5baed7d.slice/crio-3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399 WatchSource:0}: Error finding container 3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399: Status 404 returned error can't find the container with id 3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.752774 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"]
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.754079 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.757178 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.765302 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"]
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.872219 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.872969 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.873085 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dkgd\" (UniqueName: \"kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.974816 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.974861 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dkgd\" (UniqueName: \"kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.974911 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.975485 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.975607 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:54 crc kubenswrapper[4924]: I1211 14:11:54.991909 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dkgd\" (UniqueName: \"kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:55 crc kubenswrapper[4924]: I1211 14:11:55.067850 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:11:55 crc kubenswrapper[4924]: I1211 14:11:55.253499 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"]
Dec 11 14:11:55 crc kubenswrapper[4924]: W1211 14:11:55.260796 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe27eea4_41d6_4184_8c58_9d160407dd65.slice/crio-9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681 WatchSource:0}: Error finding container 9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681: Status 404 returned error can't find the container with id 9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681
Dec 11 14:11:55 crc kubenswrapper[4924]: I1211 14:11:55.411533 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerStarted","Data":"3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399"}
Dec 11 14:11:55 crc kubenswrapper[4924]: I1211 14:11:55.412799 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerStarted","Data":"9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681"}
Dec 11 14:12:04 crc kubenswrapper[4924]: I1211 14:12:04.467216 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerStarted","Data":"58b39edaa86adb179bd1ddbc57d040b1c5b297e40412bdce38e86d38703a7ae7"}
Dec 11 14:12:04 crc kubenswrapper[4924]: I1211
14:12:04.469078 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerStarted","Data":"64b09938defdd1473849a50a8001b0829c4d3f4e56c6e15977a81b9b3318a731"}
Dec 11 14:12:04 crc kubenswrapper[4924]: I1211 14:12:04.470723 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerStarted","Data":"969cf679afff6bbc78c2615cd58b9b86ac26bdb19a4ba55d8a101c77f9ee9308"}
Dec 11 14:12:09 crc kubenswrapper[4924]: I1211 14:12:09.505618 4924 generic.go:334] "Generic (PLEG): container finished" podID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerID="969cf679afff6bbc78c2615cd58b9b86ac26bdb19a4ba55d8a101c77f9ee9308" exitCode=0
Dec 11 14:12:09 crc kubenswrapper[4924]: I1211 14:12:09.505745 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerDied","Data":"969cf679afff6bbc78c2615cd58b9b86ac26bdb19a4ba55d8a101c77f9ee9308"}
Dec 11 14:12:10 crc kubenswrapper[4924]: I1211 14:12:10.515870 4924 generic.go:334] "Generic (PLEG): container finished" podID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerID="58b39edaa86adb179bd1ddbc57d040b1c5b297e40412bdce38e86d38703a7ae7" exitCode=0
Dec 11 14:12:10 crc kubenswrapper[4924]: I1211 14:12:10.515974 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerDied","Data":"58b39edaa86adb179bd1ddbc57d040b1c5b297e40412bdce38e86d38703a7ae7"}
Dec 11 14:12:11 crc kubenswrapper[4924]: I1211 14:12:11.536393 4924 generic.go:334] "Generic (PLEG): container finished" podID="0291549d-f78a-4986-b217-0aeca5baed7d" containerID="64b09938defdd1473849a50a8001b0829c4d3f4e56c6e15977a81b9b3318a731" exitCode=0
Dec 11 14:12:11 crc kubenswrapper[4924]: I1211 14:12:11.536454 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerDied","Data":"64b09938defdd1473849a50a8001b0829c4d3f4e56c6e15977a81b9b3318a731"}
Dec 11 14:12:15 crc kubenswrapper[4924]: I1211 14:12:15.433019 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 14:12:15 crc kubenswrapper[4924]: I1211 14:12:15.433408 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 14:12:16 crc kubenswrapper[4924]: I1211 14:12:16.566738 4924 generic.go:334] "Generic (PLEG): container finished" podID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerID="f8f7f1b68c1a3071f1f2b7b80d32c23431e82477d45d769395ef8aeff2155bf2" exitCode=0
Dec 11 14:12:16 crc kubenswrapper[4924]: I1211 14:12:16.566795 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerDied","Data":"f8f7f1b68c1a3071f1f2b7b80d32c23431e82477d45d769395ef8aeff2155bf2"}
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.579874 4924 generic.go:334] "Generic (PLEG): container finished" podID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerID="38f7b5427cb65a6a08c10228e192d8b8ecc71a0a0bc50f10801115fb12bdc7ef" exitCode=0
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.579951 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerDied","Data":"38f7b5427cb65a6a08c10228e192d8b8ecc71a0a0bc50f10801115fb12bdc7ef"}
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.584672 4924 generic.go:334] "Generic (PLEG): container finished" podID="0291549d-f78a-4986-b217-0aeca5baed7d" containerID="259dfcef4bc8f92ca0697c16f397ecf03e4e7904c6191846ac55dc0cddb5975a" exitCode=0
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.584756 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerDied","Data":"259dfcef4bc8f92ca0697c16f397ecf03e4e7904c6191846ac55dc0cddb5975a"}
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.595072 4924 generic.go:334] "Generic (PLEG): container finished" podID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerID="bb35b3efc200a05a879974eea843bf2c37b5b188d2d13d3f06f595948a7bb8e7" exitCode=0
Dec 11 14:12:18 crc kubenswrapper[4924]: I1211 14:12:18.595119 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerDied","Data":"bb35b3efc200a05a879974eea843bf2c37b5b188d2d13d3f06f595948a7bb8e7"}
Dec 11 14:12:19 crc kubenswrapper[4924]: I1211 14:12:19.602832 4924 generic.go:334] "Generic (PLEG): container finished" podID="0291549d-f78a-4986-b217-0aeca5baed7d" containerID="8d3c65cb4901bde099b075bd60b9a8f25b4b83ae0c4617b3e4ef65720f80116d" exitCode=0
Dec 11 14:12:19 crc kubenswrapper[4924]: I1211 14:12:19.602903 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerDied","Data":"8d3c65cb4901bde099b075bd60b9a8f25b4b83ae0c4617b3e4ef65720f80116d"}
Dec 11 14:12:19 crc kubenswrapper[4924]: I1211 14:12:19.605514 4924 generic.go:334] "Generic (PLEG): container finished" podID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerID="91df83ef6a5cd4afbfc914af1f359ad9cc29735b56ec90429a0b674a9e6d8507" exitCode=0
Dec 11 14:12:19 crc kubenswrapper[4924]: I1211 14:12:19.605581 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerDied","Data":"91df83ef6a5cd4afbfc914af1f359ad9cc29735b56ec90429a0b674a9e6d8507"}
Dec 11 14:12:19 crc kubenswrapper[4924]: I1211 14:12:19.817039 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q"
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.001168 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util\") pod \"84ad49c2-e643-4ed6-9e9a-379854109a8e\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") "
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.001409 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle\") pod \"84ad49c2-e643-4ed6-9e9a-379854109a8e\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") "
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.001488 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6chq\" (UniqueName: \"kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq\") pod \"84ad49c2-e643-4ed6-9e9a-379854109a8e\" (UID: \"84ad49c2-e643-4ed6-9e9a-379854109a8e\") "
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.003679 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle" (OuterVolumeSpecName: "bundle") pod "84ad49c2-e643-4ed6-9e9a-379854109a8e" (UID: "84ad49c2-e643-4ed6-9e9a-379854109a8e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.015778 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util" (OuterVolumeSpecName: "util") pod "84ad49c2-e643-4ed6-9e9a-379854109a8e" (UID: "84ad49c2-e643-4ed6-9e9a-379854109a8e"). InnerVolumeSpecName "util".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.018350 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq" (OuterVolumeSpecName: "kube-api-access-r6chq") pod "84ad49c2-e643-4ed6-9e9a-379854109a8e" (UID: "84ad49c2-e643-4ed6-9e9a-379854109a8e"). InnerVolumeSpecName "kube-api-access-r6chq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.102489 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.102531 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6chq\" (UniqueName: \"kubernetes.io/projected/84ad49c2-e643-4ed6-9e9a-379854109a8e-kube-api-access-r6chq\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.102544 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/84ad49c2-e643-4ed6-9e9a-379854109a8e-util\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.613733 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q" event={"ID":"84ad49c2-e643-4ed6-9e9a-379854109a8e","Type":"ContainerDied","Data":"ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9"}
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.614107 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac66894e59f88b368f4bd85c0cb3c6fbe27daad2f3ce38b459a68e62241054e9"
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.614074 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/cd07ebce3b618aaffa8e106dab3e8eb93287fbb0e4c5a9c0f6ea8fc7eb4mn5q"
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.936633 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:12:20 crc kubenswrapper[4924]: I1211 14:12:20.943423 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013007 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkwqm\" (UniqueName: \"kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm\") pod \"0291549d-f78a-4986-b217-0aeca5baed7d\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013047 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dkgd\" (UniqueName: \"kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd\") pod \"fe27eea4-41d6-4184-8c58-9d160407dd65\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013070 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util\") pod \"fe27eea4-41d6-4184-8c58-9d160407dd65\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013104 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle\") pod \"fe27eea4-41d6-4184-8c58-9d160407dd65\" (UID: \"fe27eea4-41d6-4184-8c58-9d160407dd65\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013144 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle\") pod \"0291549d-f78a-4986-b217-0aeca5baed7d\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.013197 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util\") pod \"0291549d-f78a-4986-b217-0aeca5baed7d\" (UID: \"0291549d-f78a-4986-b217-0aeca5baed7d\") "
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.014206 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle" (OuterVolumeSpecName: "bundle") pod "fe27eea4-41d6-4184-8c58-9d160407dd65" (UID: "fe27eea4-41d6-4184-8c58-9d160407dd65"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.014801 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle" (OuterVolumeSpecName: "bundle") pod "0291549d-f78a-4986-b217-0aeca5baed7d" (UID: "0291549d-f78a-4986-b217-0aeca5baed7d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.020443 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm" (OuterVolumeSpecName: "kube-api-access-tkwqm") pod "0291549d-f78a-4986-b217-0aeca5baed7d" (UID: "0291549d-f78a-4986-b217-0aeca5baed7d"). InnerVolumeSpecName "kube-api-access-tkwqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.025393 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util" (OuterVolumeSpecName: "util") pod "fe27eea4-41d6-4184-8c58-9d160407dd65" (UID: "fe27eea4-41d6-4184-8c58-9d160407dd65"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.025705 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd" (OuterVolumeSpecName: "kube-api-access-5dkgd") pod "fe27eea4-41d6-4184-8c58-9d160407dd65" (UID: "fe27eea4-41d6-4184-8c58-9d160407dd65"). InnerVolumeSpecName "kube-api-access-5dkgd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.030804 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util" (OuterVolumeSpecName: "util") pod "0291549d-f78a-4986-b217-0aeca5baed7d" (UID: "0291549d-f78a-4986-b217-0aeca5baed7d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.113805 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkwqm\" (UniqueName: \"kubernetes.io/projected/0291549d-f78a-4986-b217-0aeca5baed7d-kube-api-access-tkwqm\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.114087 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dkgd\" (UniqueName: \"kubernetes.io/projected/fe27eea4-41d6-4184-8c58-9d160407dd65-kube-api-access-5dkgd\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.114184 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-util\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.114368 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fe27eea4-41d6-4184-8c58-9d160407dd65-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.114447 4924 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.114519 4924 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0291549d-f78a-4986-b217-0aeca5baed7d-util\") on node \"crc\" DevicePath \"\""
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.620077 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b" event={"ID":"0291549d-f78a-4986-b217-0aeca5baed7d","Type":"ContainerDied","Data":"3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399"}
Dec 11 14:12:21 crc
kubenswrapper[4924]: I1211 14:12:21.620121 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a2cf431412e35abd68cb30551469242e0bb045c678050caa8be4479f366c399"
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.620197 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/27e4a3b82b847aaaac340f98fd9ec51c99f28242b589c6c251a26fbc3bvxd8b"
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.625317 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt" event={"ID":"fe27eea4-41d6-4184-8c58-9d160407dd65","Type":"ContainerDied","Data":"9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681"}
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.625395 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5045b57e43ad536db312d950f1ba99a16e0a1c0169cf4d51ae2236c5366681"
Dec 11 14:12:21 crc kubenswrapper[4924]: I1211 14:12:21.625535 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694142 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"]
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694874 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694891 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694904 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694913 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694933 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694941 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694950 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694958 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694970 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694979 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.694989 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.694996 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.695011 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695018 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.695031 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695039 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="util"
Dec 11 14:12:27 crc kubenswrapper[4924]: E1211 14:12:27.695051 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695059 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="pull"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695189 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ad49c2-e643-4ed6-9e9a-379854109a8e" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695206 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe27eea4-41d6-4184-8c58-9d160407dd65" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695215 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="0291549d-f78a-4986-b217-0aeca5baed7d" containerName="extract"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.695749 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.697656 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-mv8c5"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.714172 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"]
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.797438 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlg55\" (UniqueName: \"kubernetes.io/projected/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-kube-api-access-qlg55\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.797604 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-runner\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.898848 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlg55\" (UniqueName: \"kubernetes.io/projected/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-kube-api-access-qlg55\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.898971 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-runner\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.899458 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-runner\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:27 crc kubenswrapper[4924]: I1211 14:12:27.917370 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlg55\" (UniqueName: \"kubernetes.io/projected/7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff-kube-api-access-qlg55\") pod \"smart-gateway-operator-6467cb9984-k9ft9\" (UID: \"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff\") " pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:28 crc kubenswrapper[4924]: I1211 14:12:28.012984 4924 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"
Dec 11 14:12:28 crc kubenswrapper[4924]: I1211 14:12:28.445605 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-6467cb9984-k9ft9"]
Dec 11 14:12:28 crc kubenswrapper[4924]: I1211 14:12:28.671754 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9" event={"ID":"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff","Type":"ContainerStarted","Data":"59ec70bd92846399771d3de2cd5258b71f8874e26c1ce79a39396226f7da72df"}
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.145877 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"]
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.146818 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.149500 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-cd92v"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.163381 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"]
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.214638 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvkh\" (UniqueName: \"kubernetes.io/projected/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-kube-api-access-vfvkh\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.214702 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-runner\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.315970 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvkh\" (UniqueName: \"kubernetes.io/projected/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-kube-api-access-vfvkh\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.316282 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-runner\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.316884 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-runner\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.338582 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvkh\" (UniqueName: \"kubernetes.io/projected/5ea2b316-8ef3-4af3-9c0f-92064f0934d2-kube-api-access-vfvkh\") pod \"service-telemetry-operator-75bb6547f5-rjpvz\" (UID: \"5ea2b316-8ef3-4af3-9c0f-92064f0934d2\") " pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.463199 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"
Dec 11 14:12:29 crc kubenswrapper[4924]: I1211 14:12:29.786681 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz"]
Dec 11 14:12:29 crc kubenswrapper[4924]: W1211 14:12:29.792907 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ea2b316_8ef3_4af3_9c0f_92064f0934d2.slice/crio-7088dc47cff5d4336faed7cb5427568d6f2986b28da8fdb21d5ab5b8b4e2e8ed WatchSource:0}: Error finding container 7088dc47cff5d4336faed7cb5427568d6f2986b28da8fdb21d5ab5b8b4e2e8ed: Status 404 returned error can't find the container with id 7088dc47cff5d4336faed7cb5427568d6f2986b28da8fdb21d5ab5b8b4e2e8ed
Dec 11 14:12:30 crc kubenswrapper[4924]: I1211 14:12:30.687299 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz" event={"ID":"5ea2b316-8ef3-4af3-9c0f-92064f0934d2","Type":"ContainerStarted","Data":"7088dc47cff5d4336faed7cb5427568d6f2986b28da8fdb21d5ab5b8b4e2e8ed"}
Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.277807 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-9zdz4"]
Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.283007 4924 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.287588 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-frnw2" Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.291072 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-9zdz4"] Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.464040 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dr5k\" (UniqueName: \"kubernetes.io/projected/3b402e78-912a-4222-aded-7fd2d749a577-kube-api-access-5dr5k\") pod \"interconnect-operator-5bb49f789d-9zdz4\" (UID: \"3b402e78-912a-4222-aded-7fd2d749a577\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.565620 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dr5k\" (UniqueName: \"kubernetes.io/projected/3b402e78-912a-4222-aded-7fd2d749a577-kube-api-access-5dr5k\") pod \"interconnect-operator-5bb49f789d-9zdz4\" (UID: \"3b402e78-912a-4222-aded-7fd2d749a577\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.585866 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dr5k\" (UniqueName: \"kubernetes.io/projected/3b402e78-912a-4222-aded-7fd2d749a577-kube-api-access-5dr5k\") pod \"interconnect-operator-5bb49f789d-9zdz4\" (UID: \"3b402e78-912a-4222-aded-7fd2d749a577\") " pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" Dec 11 14:12:32 crc kubenswrapper[4924]: I1211 14:12:32.617105 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" Dec 11 14:12:33 crc kubenswrapper[4924]: I1211 14:12:33.056621 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-9zdz4"] Dec 11 14:12:33 crc kubenswrapper[4924]: W1211 14:12:33.071488 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b402e78_912a_4222_aded_7fd2d749a577.slice/crio-53024e418e0e1bb1aea400625fa47925773f780d043d4c228b2254b3fc3567f3 WatchSource:0}: Error finding container 53024e418e0e1bb1aea400625fa47925773f780d043d4c228b2254b3fc3567f3: Status 404 returned error can't find the container with id 53024e418e0e1bb1aea400625fa47925773f780d043d4c228b2254b3fc3567f3 Dec 11 14:12:33 crc kubenswrapper[4924]: I1211 14:12:33.709010 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" event={"ID":"3b402e78-912a-4222-aded-7fd2d749a577","Type":"ContainerStarted","Data":"53024e418e0e1bb1aea400625fa47925773f780d043d4c228b2254b3fc3567f3"} Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.433002 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.433605 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.433647 4924 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.434234 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.434291 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc" gracePeriod=600 Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.812237 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc" exitCode=0 Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.812572 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc"} Dec 11 14:12:45 crc kubenswrapper[4924]: I1211 14:12:45.812625 4924 scope.go:117] "RemoveContainer" containerID="7e4cea8eb422e0d935dde86db1abf1fbdf5c2a1faa07d50193b201cd5df925d4" Dec 11 14:12:57 crc kubenswrapper[4924]: E1211 14:12:57.896939 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Dec 11 14:12:57 crc kubenswrapper[4924]: E1211 
14:12:57.897784 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:quay.io/infrawatch/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1765406851,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qlg55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Priv
ileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-6467cb9984-k9ft9_service-telemetry(7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:12:57 crc kubenswrapper[4924]: E1211 14:12:57.898938 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9" podUID="7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff" Dec 11 14:12:57 crc kubenswrapper[4924]: I1211 14:12:57.913581 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752"} Dec 11 14:12:58 crc kubenswrapper[4924]: E1211 14:12:58.921989 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9" podUID="7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff" Dec 11 14:13:01 crc kubenswrapper[4924]: I1211 14:13:01.946221 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" 
event={"ID":"3b402e78-912a-4222-aded-7fd2d749a577","Type":"ContainerStarted","Data":"39813df31b172013e70af5b997ec70d981835f97cc29eb5cfa06391cc8742573"} Dec 11 14:13:01 crc kubenswrapper[4924]: I1211 14:13:01.967680 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-9zdz4" podStartSLOduration=12.871537431 podStartE2EDuration="29.967663223s" podCreationTimestamp="2025-12-11 14:12:32 +0000 UTC" firstStartedPulling="2025-12-11 14:12:33.075156753 +0000 UTC m=+1166.584637720" lastFinishedPulling="2025-12-11 14:12:50.171282535 +0000 UTC m=+1183.680763512" observedRunningTime="2025-12-11 14:13:01.964258426 +0000 UTC m=+1195.473739403" watchObservedRunningTime="2025-12-11 14:13:01.967663223 +0000 UTC m=+1195.477144200" Dec 11 14:13:02 crc kubenswrapper[4924]: I1211 14:13:02.953304 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz" event={"ID":"5ea2b316-8ef3-4af3-9c0f-92064f0934d2","Type":"ContainerStarted","Data":"5bbc2e7e9dbbcffb743f937cd07729f49664f54c210382bb49e9f2233c74fb9e"} Dec 11 14:13:02 crc kubenswrapper[4924]: I1211 14:13:02.968715 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-75bb6547f5-rjpvz" podStartSLOduration=1.7677032289999999 podStartE2EDuration="33.968696658s" podCreationTimestamp="2025-12-11 14:12:29 +0000 UTC" firstStartedPulling="2025-12-11 14:12:29.801667964 +0000 UTC m=+1163.311148941" lastFinishedPulling="2025-12-11 14:13:02.002661393 +0000 UTC m=+1195.512142370" observedRunningTime="2025-12-11 14:13:02.966355481 +0000 UTC m=+1196.475836468" watchObservedRunningTime="2025-12-11 14:13:02.968696658 +0000 UTC m=+1196.478177635" Dec 11 14:13:23 crc kubenswrapper[4924]: I1211 14:13:23.079022 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9" 
event={"ID":"7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff","Type":"ContainerStarted","Data":"7988ad7fdbf03a2412759ad575c2adfff313eeb010276f04b5aee1b7afac8e85"} Dec 11 14:13:23 crc kubenswrapper[4924]: I1211 14:13:23.101114 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-6467cb9984-k9ft9" podStartSLOduration=1.977658363 podStartE2EDuration="56.10109732s" podCreationTimestamp="2025-12-11 14:12:27 +0000 UTC" firstStartedPulling="2025-12-11 14:12:28.454224386 +0000 UTC m=+1161.963705363" lastFinishedPulling="2025-12-11 14:13:22.577663343 +0000 UTC m=+1216.087144320" observedRunningTime="2025-12-11 14:13:23.097303801 +0000 UTC m=+1216.606784788" watchObservedRunningTime="2025-12-11 14:13:23.10109732 +0000 UTC m=+1216.610578297" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.059305 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"] Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.060849 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.062543 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.064533 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.064807 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.065051 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-wcmkq" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.066182 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.069027 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.072253 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.072565 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"] Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.165732 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " 
pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.165813 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.166003 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.166157 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.166237 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hss\" (UniqueName: \"kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.166564 4924 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.166702 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.268694 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.268807 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.268892 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8hss\" 
(UniqueName: \"kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.268966 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.268996 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.269022 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.269055 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc 
kubenswrapper[4924]: I1211 14:13:38.270288 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.275078 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.275358 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.275904 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.275915 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: 
\"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.281959 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.300416 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8hss\" (UniqueName: \"kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss\") pod \"default-interconnect-68864d46cb-w8jbd\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") " pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:38 crc kubenswrapper[4924]: I1211 14:13:38.389525 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" Dec 11 14:13:39 crc kubenswrapper[4924]: I1211 14:13:39.144392 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"] Dec 11 14:13:39 crc kubenswrapper[4924]: W1211 14:13:39.165710 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b61d340_0e65_430b_a47b_d874700a8641.slice/crio-c26bcd7684d0ddc34e79ffb51802ce96701618689c27cda8afd9985366f980d8 WatchSource:0}: Error finding container c26bcd7684d0ddc34e79ffb51802ce96701618689c27cda8afd9985366f980d8: Status 404 returned error can't find the container with id c26bcd7684d0ddc34e79ffb51802ce96701618689c27cda8afd9985366f980d8 Dec 11 14:13:39 crc kubenswrapper[4924]: I1211 14:13:39.169612 4924 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:13:39 crc kubenswrapper[4924]: I1211 14:13:39.177575 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" event={"ID":"5b61d340-0e65-430b-a47b-d874700a8641","Type":"ContainerStarted","Data":"c26bcd7684d0ddc34e79ffb51802ce96701618689c27cda8afd9985366f980d8"} Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.796280 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.798518 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805021 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805021 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805192 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805210 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805412 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805412 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-8nz4b" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.805713 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.806156 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.811104 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834120 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config\") pod \"prometheus-default-0\" (UID: 
\"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834171 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-tls-assets\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834212 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834262 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834285 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config-out\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834311 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-web-config\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834349 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834367 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834387 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.834411 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltzzz\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-kube-api-access-ltzzz\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.935365 4924 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-web-config\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.935425 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.935450 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.935498 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.936628 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc 
kubenswrapper[4924]: I1211 14:13:49.936677 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltzzz\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-kube-api-access-ltzzz\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.936688 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.936740 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.936766 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-tls-assets\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.937166 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 
14:13:49.937208 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.937249 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config-out\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: E1211 14:13:49.937310 4924 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 11 14:13:49 crc kubenswrapper[4924]: E1211 14:13:49.937375 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls podName:b503f5e8-7bbc-48a3-aed9-83ebaebbab33 nodeName:}" failed. No retries permitted until 2025-12-11 14:13:50.437358905 +0000 UTC m=+1243.946839882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "b503f5e8-7bbc-48a3-aed9-83ebaebbab33") : secret "default-prometheus-proxy-tls" not found Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.938990 4924 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.939155 4924 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e5a811cee2d8f200c472ef9a4ddcc29120772ab44c99818a97e65900417228b5/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.941165 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config-out\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.941800 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-config\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.942581 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-web-config\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.942950 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-tls-assets\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " 
pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.947558 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.955104 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltzzz\" (UniqueName: \"kubernetes.io/projected/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-kube-api-access-ltzzz\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:49 crc kubenswrapper[4924]: I1211 14:13:49.970882 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d49a6ba7-3754-43de-bb0f-87d15cf5180c\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:50 crc kubenswrapper[4924]: I1211 14:13:50.245630 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" event={"ID":"5b61d340-0e65-430b-a47b-d874700a8641","Type":"ContainerStarted","Data":"6d7208de1b24a25dfe545e9abfc24b74aa1ade3bf11ba375c9533a3d10d5bda6"} Dec 11 14:13:50 crc kubenswrapper[4924]: I1211 14:13:50.267191 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" podStartSLOduration=1.998962676 podStartE2EDuration="12.267175379s" podCreationTimestamp="2025-12-11 14:13:38 +0000 UTC" firstStartedPulling="2025-12-11 14:13:39.169239827 +0000 UTC m=+1232.678720804" 
lastFinishedPulling="2025-12-11 14:13:49.43745253 +0000 UTC m=+1242.946933507" observedRunningTime="2025-12-11 14:13:50.264791211 +0000 UTC m=+1243.774272188" watchObservedRunningTime="2025-12-11 14:13:50.267175379 +0000 UTC m=+1243.776656366" Dec 11 14:13:50 crc kubenswrapper[4924]: I1211 14:13:50.444903 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:50 crc kubenswrapper[4924]: E1211 14:13:50.445128 4924 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 11 14:13:50 crc kubenswrapper[4924]: E1211 14:13:50.445212 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls podName:b503f5e8-7bbc-48a3-aed9-83ebaebbab33 nodeName:}" failed. No retries permitted until 2025-12-11 14:13:51.445190506 +0000 UTC m=+1244.954671503 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "b503f5e8-7bbc-48a3-aed9-83ebaebbab33") : secret "default-prometheus-proxy-tls" not found Dec 11 14:13:51 crc kubenswrapper[4924]: I1211 14:13:51.460026 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:51 crc kubenswrapper[4924]: I1211 14:13:51.465023 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/b503f5e8-7bbc-48a3-aed9-83ebaebbab33-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"b503f5e8-7bbc-48a3-aed9-83ebaebbab33\") " pod="service-telemetry/prometheus-default-0" Dec 11 14:13:51 crc kubenswrapper[4924]: I1211 14:13:51.619587 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 11 14:13:51 crc kubenswrapper[4924]: I1211 14:13:51.817200 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 11 14:13:51 crc kubenswrapper[4924]: W1211 14:13:51.825427 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb503f5e8_7bbc_48a3_aed9_83ebaebbab33.slice/crio-1c2788179d13e09f5a6138c4ba8eaa259d04f5678fdc98907f3c1f270e9b0326 WatchSource:0}: Error finding container 1c2788179d13e09f5a6138c4ba8eaa259d04f5678fdc98907f3c1f270e9b0326: Status 404 returned error can't find the container with id 1c2788179d13e09f5a6138c4ba8eaa259d04f5678fdc98907f3c1f270e9b0326 Dec 11 14:13:52 crc kubenswrapper[4924]: I1211 14:13:52.258918 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerStarted","Data":"1c2788179d13e09f5a6138c4ba8eaa259d04f5678fdc98907f3c1f270e9b0326"} Dec 11 14:13:56 crc kubenswrapper[4924]: I1211 14:13:56.285600 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerStarted","Data":"aa17491587a066ed0e62de7a5b119193c2a2de591f5ef0ab61c23ea7db556085"} Dec 11 14:13:59 crc kubenswrapper[4924]: I1211 14:13:59.720834 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4"] Dec 11 14:13:59 crc kubenswrapper[4924]: I1211 14:13:59.722929 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" Dec 11 14:13:59 crc kubenswrapper[4924]: I1211 14:13:59.732293 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4"] Dec 11 14:13:59 crc kubenswrapper[4924]: I1211 14:13:59.876792 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs29w\" (UniqueName: \"kubernetes.io/projected/4d1f4a5b-ce7c-4386-ba50-c36ff3de3686-kube-api-access-zs29w\") pod \"default-snmp-webhook-78bcbbdcff-9nvf4\" (UID: \"4d1f4a5b-ce7c-4386-ba50-c36ff3de3686\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" Dec 11 14:13:59 crc kubenswrapper[4924]: I1211 14:13:59.978421 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs29w\" (UniqueName: \"kubernetes.io/projected/4d1f4a5b-ce7c-4386-ba50-c36ff3de3686-kube-api-access-zs29w\") pod \"default-snmp-webhook-78bcbbdcff-9nvf4\" (UID: \"4d1f4a5b-ce7c-4386-ba50-c36ff3de3686\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" Dec 11 14:14:00 crc kubenswrapper[4924]: I1211 14:14:00.000438 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs29w\" (UniqueName: \"kubernetes.io/projected/4d1f4a5b-ce7c-4386-ba50-c36ff3de3686-kube-api-access-zs29w\") pod \"default-snmp-webhook-78bcbbdcff-9nvf4\" (UID: \"4d1f4a5b-ce7c-4386-ba50-c36ff3de3686\") " pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" Dec 11 14:14:00 crc kubenswrapper[4924]: I1211 14:14:00.050487 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" Dec 11 14:14:00 crc kubenswrapper[4924]: I1211 14:14:00.573731 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4"] Dec 11 14:14:01 crc kubenswrapper[4924]: I1211 14:14:01.323318 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" event={"ID":"4d1f4a5b-ce7c-4386-ba50-c36ff3de3686","Type":"ContainerStarted","Data":"cfcf04ab414e81fc406089608bfcbf4f408f3196566147a8e18b400ddf37dfc1"} Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.605702 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.619415 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.622953 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.623450 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.623772 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.623892 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.624203 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.625988 4924 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"alertmanager-stf-dockercfg-bprvl" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.629195 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787482 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787551 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vhkz\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-kube-api-access-4vhkz\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787660 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787737 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787780 
4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-web-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787810 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49243203-c21f-40a7-b19b-da1632d7ede1-config-out\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787876 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787962 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.787991 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-config-volume\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc 
kubenswrapper[4924]: I1211 14:14:03.889903 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.889980 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-config-volume\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890462 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890501 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vhkz\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-kube-api-access-4vhkz\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890553 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc 
kubenswrapper[4924]: I1211 14:14:03.890589 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890623 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-web-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890659 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49243203-c21f-40a7-b19b-da1632d7ede1-config-out\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.890719 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: E1211 14:14:03.890820 4924 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:03 crc kubenswrapper[4924]: E1211 14:14:03.890891 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls 
podName:49243203-c21f-40a7-b19b-da1632d7ede1 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:04.390871195 +0000 UTC m=+1257.900352162 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "49243203-c21f-40a7-b19b-da1632d7ede1") : secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.895560 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49243203-c21f-40a7-b19b-da1632d7ede1-config-out\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.896249 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-web-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.896281 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.896414 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-tls-assets\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 
14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.897109 4924 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.897139 4924 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f6b2490ea8da6bb91b976757d93b5a3fdb840ecfcccde42e1c3a8e9695e1140/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.898249 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-config-volume\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.918715 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 14:14:03.923497 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vhkz\" (UniqueName: \"kubernetes.io/projected/49243203-c21f-40a7-b19b-da1632d7ede1-kube-api-access-4vhkz\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:03 crc kubenswrapper[4924]: I1211 
14:14:03.943873 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ab135a70-c5e9-40e0-bbfe-8dfd6bcd8a5e\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:04 crc kubenswrapper[4924]: I1211 14:14:04.398436 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:04 crc kubenswrapper[4924]: E1211 14:14:04.399128 4924 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:04 crc kubenswrapper[4924]: E1211 14:14:04.399249 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls podName:49243203-c21f-40a7-b19b-da1632d7ede1 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:05.399225721 +0000 UTC m=+1258.908706698 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "49243203-c21f-40a7-b19b-da1632d7ede1") : secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:05 crc kubenswrapper[4924]: I1211 14:14:05.414648 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:05 crc kubenswrapper[4924]: E1211 14:14:05.414877 4924 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:05 crc kubenswrapper[4924]: E1211 14:14:05.414965 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls podName:49243203-c21f-40a7-b19b-da1632d7ede1 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:07.414943006 +0000 UTC m=+1260.924423983 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "49243203-c21f-40a7-b19b-da1632d7ede1") : secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:07 crc kubenswrapper[4924]: I1211 14:14:07.406200 4924 generic.go:334] "Generic (PLEG): container finished" podID="b503f5e8-7bbc-48a3-aed9-83ebaebbab33" containerID="aa17491587a066ed0e62de7a5b119193c2a2de591f5ef0ab61c23ea7db556085" exitCode=0 Dec 11 14:14:07 crc kubenswrapper[4924]: I1211 14:14:07.406518 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerDied","Data":"aa17491587a066ed0e62de7a5b119193c2a2de591f5ef0ab61c23ea7db556085"} Dec 11 14:14:07 crc kubenswrapper[4924]: I1211 14:14:07.448233 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:07 crc kubenswrapper[4924]: E1211 14:14:07.448408 4924 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:07 crc kubenswrapper[4924]: E1211 14:14:07.448646 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls podName:49243203-c21f-40a7-b19b-da1632d7ede1 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:11.448456913 +0000 UTC m=+1264.957937890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "49243203-c21f-40a7-b19b-da1632d7ede1") : secret "default-alertmanager-proxy-tls" not found Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.435594 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" event={"ID":"4d1f4a5b-ce7c-4386-ba50-c36ff3de3686","Type":"ContainerStarted","Data":"5ef1a4a5d2acfc076a6745e7aaa5a9b49a7a18c14b1ef1c4ce7c9bec2e28b9b4"} Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.464977 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-78bcbbdcff-9nvf4" podStartSLOduration=2.593401836 podStartE2EDuration="12.464943333s" podCreationTimestamp="2025-12-11 14:13:59 +0000 UTC" firstStartedPulling="2025-12-11 14:14:00.583613151 +0000 UTC m=+1254.093094138" lastFinishedPulling="2025-12-11 14:14:10.455154658 +0000 UTC m=+1263.964635635" observedRunningTime="2025-12-11 14:14:11.454775403 +0000 UTC m=+1264.964256380" watchObservedRunningTime="2025-12-11 14:14:11.464943333 +0000 UTC m=+1264.974424310" Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.467847 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.493144 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/49243203-c21f-40a7-b19b-da1632d7ede1-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"49243203-c21f-40a7-b19b-da1632d7ede1\") " pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.754434 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-bprvl" Dec 11 14:14:11 crc kubenswrapper[4924]: I1211 14:14:11.763994 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 11 14:14:22 crc kubenswrapper[4924]: I1211 14:14:22.630928 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 11 14:14:22 crc kubenswrapper[4924]: W1211 14:14:22.704294 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49243203_c21f_40a7_b19b_da1632d7ede1.slice/crio-60bdbe3f5451d758f94fd442adb7272b242a0a75a9eb07d48a6381dd5fa258b2 WatchSource:0}: Error finding container 60bdbe3f5451d758f94fd442adb7272b242a0a75a9eb07d48a6381dd5fa258b2: Status 404 returned error can't find the container with id 60bdbe3f5451d758f94fd442adb7272b242a0a75a9eb07d48a6381dd5fa258b2 Dec 11 14:14:23 crc kubenswrapper[4924]: I1211 14:14:23.039276 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerStarted","Data":"849e449d224775d110701f9f913e6f55801c4e3c4043f5930127ccee4811f142"} Dec 11 14:14:23 crc kubenswrapper[4924]: I1211 14:14:23.040869 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerStarted","Data":"60bdbe3f5451d758f94fd442adb7272b242a0a75a9eb07d48a6381dd5fa258b2"} Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.055355 
4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerStarted","Data":"825b912210e32750c851ba79e1a4540a40c2fcb3389dbc3b178a0630802d01d5"} Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.072377 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerStarted","Data":"a908ef9f05ee9e32c799d2cb12a75d293a4c0703a63429071ff386b02fab9837"} Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.822495 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl"] Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.823903 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.826772 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.827738 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.827900 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-4lkgl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.832360 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.833824 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl"] Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.993145 
4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.993403 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.993457 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.993545 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:25 crc kubenswrapper[4924]: I1211 14:14:25.993674 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-nklqm\" (UniqueName: \"kubernetes.io/projected/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-kube-api-access-nklqm\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.095029 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.095095 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.095113 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.095132 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls\") pod 
\"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.095153 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nklqm\" (UniqueName: \"kubernetes.io/projected/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-kube-api-access-nklqm\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.095338 4924 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.095421 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls podName:8f5e3b53-98c3-40f8-8376-d0f308b68f7b nodeName:}" failed. No retries permitted until 2025-12-11 14:14:26.595399537 +0000 UTC m=+1280.104880514 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" (UID: "8f5e3b53-98c3-40f8-8376-d0f308b68f7b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.096283 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.096604 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.110217 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.116498 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nklqm\" (UniqueName: \"kubernetes.io/projected/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-kube-api-access-nklqm\") pod 
\"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.603526 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.603702 4924 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.603884 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls podName:8f5e3b53-98c3-40f8-8376-d0f308b68f7b nodeName:}" failed. No retries permitted until 2025-12-11 14:14:27.603852026 +0000 UTC m=+1281.113333003 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" (UID: "8f5e3b53-98c3-40f8-8376-d0f308b68f7b") : secret "default-cloud1-coll-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.691400 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z"] Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.692858 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.694670 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.701453 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z"] Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.707416 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.807716 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/190d48f1-a870-406d-9bbc-b831a22ac215-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.807776 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8bc\" (UniqueName: \"kubernetes.io/projected/190d48f1-a870-406d-9bbc-b831a22ac215-kube-api-access-tm8bc\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.807803 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls\") pod 
\"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.808050 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/190d48f1-a870-406d-9bbc-b831a22ac215-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.808110 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.909395 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/190d48f1-a870-406d-9bbc-b831a22ac215-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.909517 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm8bc\" (UniqueName: \"kubernetes.io/projected/190d48f1-a870-406d-9bbc-b831a22ac215-kube-api-access-tm8bc\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.909557 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.909776 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/190d48f1-a870-406d-9bbc-b831a22ac215-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.909826 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.909837 4924 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: E1211 14:14:26.909918 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls podName:190d48f1-a870-406d-9bbc-b831a22ac215 nodeName:}" failed. 
No retries permitted until 2025-12-11 14:14:27.409896021 +0000 UTC m=+1280.919377088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" (UID: "190d48f1-a870-406d-9bbc-b831a22ac215") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.910066 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/190d48f1-a870-406d-9bbc-b831a22ac215-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.911861 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/190d48f1-a870-406d-9bbc-b831a22ac215-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.917966 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:26 crc kubenswrapper[4924]: I1211 14:14:26.925500 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm8bc\" (UniqueName: 
\"kubernetes.io/projected/190d48f1-a870-406d-9bbc-b831a22ac215-kube-api-access-tm8bc\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:27 crc kubenswrapper[4924]: I1211 14:14:27.415745 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:27 crc kubenswrapper[4924]: E1211 14:14:27.415958 4924 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 11 14:14:27 crc kubenswrapper[4924]: E1211 14:14:27.416044 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls podName:190d48f1-a870-406d-9bbc-b831a22ac215 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:28.416025454 +0000 UTC m=+1281.925506431 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" (UID: "190d48f1-a870-406d-9bbc-b831a22ac215") : secret "default-cloud1-ceil-meter-proxy-tls" not found Dec 11 14:14:27 crc kubenswrapper[4924]: I1211 14:14:27.619369 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:27 crc kubenswrapper[4924]: I1211 14:14:27.624888 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/8f5e3b53-98c3-40f8-8376-d0f308b68f7b-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl\" (UID: \"8f5e3b53-98c3-40f8-8376-d0f308b68f7b\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:27 crc kubenswrapper[4924]: I1211 14:14:27.641128 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" Dec 11 14:14:28 crc kubenswrapper[4924]: I1211 14:14:28.034156 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl"] Dec 11 14:14:28 crc kubenswrapper[4924]: I1211 14:14:28.515893 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:28 crc kubenswrapper[4924]: I1211 14:14:28.521170 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/190d48f1-a870-406d-9bbc-b831a22ac215-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z\" (UID: \"190d48f1-a870-406d-9bbc-b831a22ac215\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:28 crc kubenswrapper[4924]: I1211 14:14:28.813423 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.102885 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s"] Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.104322 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.107066 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.107242 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.109089 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s"] Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.329793 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/745d94ca-bcf1-48fc-b39b-0dbb960de581-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.329905 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.329932 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/745d94ca-bcf1-48fc-b39b-0dbb960de581-sg-core-config\") pod 
\"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.329965 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.330158 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l55kc\" (UniqueName: \"kubernetes.io/projected/745d94ca-bcf1-48fc-b39b-0dbb960de581-kube-api-access-l55kc\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.431860 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l55kc\" (UniqueName: \"kubernetes.io/projected/745d94ca-bcf1-48fc-b39b-0dbb960de581-kube-api-access-l55kc\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.431938 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/745d94ca-bcf1-48fc-b39b-0dbb960de581-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.431961 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.431978 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/745d94ca-bcf1-48fc-b39b-0dbb960de581-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.432001 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.433461 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/745d94ca-bcf1-48fc-b39b-0dbb960de581-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: E1211 14:14:30.433858 4924 secret.go:188] Couldn't get 
secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 11 14:14:30 crc kubenswrapper[4924]: E1211 14:14:30.433904 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls podName:745d94ca-bcf1-48fc-b39b-0dbb960de581 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:30.933889639 +0000 UTC m=+1284.443370616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" (UID: "745d94ca-bcf1-48fc-b39b-0dbb960de581") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.434807 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/745d94ca-bcf1-48fc-b39b-0dbb960de581-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.445895 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.486795 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l55kc\" (UniqueName: 
\"kubernetes.io/projected/745d94ca-bcf1-48fc-b39b-0dbb960de581-kube-api-access-l55kc\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: I1211 14:14:30.937797 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:30 crc kubenswrapper[4924]: E1211 14:14:30.937947 4924 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Dec 11 14:14:30 crc kubenswrapper[4924]: E1211 14:14:30.938192 4924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls podName:745d94ca-bcf1-48fc-b39b-0dbb960de581 nodeName:}" failed. No retries permitted until 2025-12-11 14:14:31.938154768 +0000 UTC m=+1285.447635745 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" (UID: "745d94ca-bcf1-48fc-b39b-0dbb960de581") : secret "default-cloud1-sens-meter-proxy-tls" not found Dec 11 14:14:31 crc kubenswrapper[4924]: I1211 14:14:31.159681 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"3f8fa96b342b418d9ec7d777b37c2f22b28d02f89f874582e8d40708eb3ec1f9"} Dec 11 14:14:31 crc kubenswrapper[4924]: I1211 14:14:31.953523 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:31 crc kubenswrapper[4924]: I1211 14:14:31.977256 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/745d94ca-bcf1-48fc-b39b-0dbb960de581-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s\" (UID: \"745d94ca-bcf1-48fc-b39b-0dbb960de581\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:31 crc kubenswrapper[4924]: I1211 14:14:31.979835 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" Dec 11 14:14:33 crc kubenswrapper[4924]: I1211 14:14:33.192054 4924 generic.go:334] "Generic (PLEG): container finished" podID="49243203-c21f-40a7-b19b-da1632d7ede1" containerID="a908ef9f05ee9e32c799d2cb12a75d293a4c0703a63429071ff386b02fab9837" exitCode=0 Dec 11 14:14:33 crc kubenswrapper[4924]: I1211 14:14:33.192214 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerDied","Data":"a908ef9f05ee9e32c799d2cb12a75d293a4c0703a63429071ff386b02fab9837"} Dec 11 14:14:33 crc kubenswrapper[4924]: I1211 14:14:33.447623 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s"] Dec 11 14:14:33 crc kubenswrapper[4924]: W1211 14:14:33.463549 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod745d94ca_bcf1_48fc_b39b_0dbb960de581.slice/crio-91b90487c958c42ab5d434d0e097a8e8bae78bcf29d814689b9dd5a4b443031e WatchSource:0}: Error finding container 91b90487c958c42ab5d434d0e097a8e8bae78bcf29d814689b9dd5a4b443031e: Status 404 returned error can't find the container with id 91b90487c958c42ab5d434d0e097a8e8bae78bcf29d814689b9dd5a4b443031e Dec 11 14:14:33 crc kubenswrapper[4924]: W1211 14:14:33.540921 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod190d48f1_a870_406d_9bbc_b831a22ac215.slice/crio-a266a0fba280d92494161916666c800e299931fe485354ff90a1752767a62649 WatchSource:0}: Error finding container a266a0fba280d92494161916666c800e299931fe485354ff90a1752767a62649: Status 404 returned error can't find the container with id a266a0fba280d92494161916666c800e299931fe485354ff90a1752767a62649 Dec 11 14:14:33 crc kubenswrapper[4924]: 
I1211 14:14:33.543243 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z"] Dec 11 14:14:34 crc kubenswrapper[4924]: I1211 14:14:34.201464 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"b503f5e8-7bbc-48a3-aed9-83ebaebbab33","Type":"ContainerStarted","Data":"63ec2cdee86e81b9e3b5c7cb974d3cb0a99aa476a3a6743d53e77f5a10549336"} Dec 11 14:14:34 crc kubenswrapper[4924]: I1211 14:14:34.203479 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"91b90487c958c42ab5d434d0e097a8e8bae78bcf29d814689b9dd5a4b443031e"} Dec 11 14:14:34 crc kubenswrapper[4924]: I1211 14:14:34.204724 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"e8ba2b0a3ca8380c31ee4bd230d07a7ede24906023947da81e99aa1073753d20"} Dec 11 14:14:34 crc kubenswrapper[4924]: I1211 14:14:34.208568 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"a266a0fba280d92494161916666c800e299931fe485354ff90a1752767a62649"} Dec 11 14:14:34 crc kubenswrapper[4924]: I1211 14:14:34.225269 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.835210851 podStartE2EDuration="46.225252367s" podCreationTimestamp="2025-12-11 14:13:48 +0000 UTC" firstStartedPulling="2025-12-11 14:13:51.827358201 +0000 UTC m=+1245.336839168" lastFinishedPulling="2025-12-11 14:14:33.217399707 +0000 UTC m=+1286.726880684" 
observedRunningTime="2025-12-11 14:14:34.220065229 +0000 UTC m=+1287.729546206" watchObservedRunningTime="2025-12-11 14:14:34.225252367 +0000 UTC m=+1287.734733344" Dec 11 14:14:35 crc kubenswrapper[4924]: I1211 14:14:35.218354 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"5507a65a68607abe147ba0f0273d2ad7ce67a9a707f143fe217620d383c148ad"} Dec 11 14:14:36 crc kubenswrapper[4924]: I1211 14:14:36.234977 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerStarted","Data":"56ee763860314844a5c59a4ff8d763bf177cfc7c6cd557880d3286a2f09d67a4"} Dec 11 14:14:36 crc kubenswrapper[4924]: I1211 14:14:36.237412 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"edb9c192b2f2b6e03cd29082086f24be5badddf917bcb74e5fe6c378b9ffffa6"} Dec 11 14:14:36 crc kubenswrapper[4924]: I1211 14:14:36.620829 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Dec 11 14:14:36 crc kubenswrapper[4924]: I1211 14:14:36.620888 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Dec 11 14:14:36 crc kubenswrapper[4924]: I1211 14:14:36.675049 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Dec 11 14:14:37 crc kubenswrapper[4924]: I1211 14:14:37.354195 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Dec 11 14:14:38 crc kubenswrapper[4924]: I1211 14:14:38.253258 4924 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerStarted","Data":"1aaca966d694dc4cd689885fc7c84bcff27ba2f70006517006ac261b89b4117d"} Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.699706 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh"] Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.701219 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.704785 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.705345 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.709182 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh"] Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.853388 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/533c2af6-100b-49d2-b06f-8fd5c6754220-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.853466 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/533c2af6-100b-49d2-b06f-8fd5c6754220-elastic-certs\") pod 
\"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.853511 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmlgm\" (UniqueName: \"kubernetes.io/projected/533c2af6-100b-49d2-b06f-8fd5c6754220-kube-api-access-vmlgm\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.854031 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/533c2af6-100b-49d2-b06f-8fd5c6754220-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.955875 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/533c2af6-100b-49d2-b06f-8fd5c6754220-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.955965 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/533c2af6-100b-49d2-b06f-8fd5c6754220-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.956028 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmlgm\" (UniqueName: \"kubernetes.io/projected/533c2af6-100b-49d2-b06f-8fd5c6754220-kube-api-access-vmlgm\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.956085 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/533c2af6-100b-49d2-b06f-8fd5c6754220-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.957152 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/533c2af6-100b-49d2-b06f-8fd5c6754220-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.972642 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/533c2af6-100b-49d2-b06f-8fd5c6754220-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.982087 4924 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vmlgm\" (UniqueName: \"kubernetes.io/projected/533c2af6-100b-49d2-b06f-8fd5c6754220-kube-api-access-vmlgm\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:39 crc kubenswrapper[4924]: I1211 14:14:39.992787 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/533c2af6-100b-49d2-b06f-8fd5c6754220-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh\" (UID: \"533c2af6-100b-49d2-b06f-8fd5c6754220\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.033885 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.760005 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt"] Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.761549 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.763978 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.776816 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt"] Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.783568 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.783727 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.783794 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljmps\" (UniqueName: \"kubernetes.io/projected/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-kube-api-access-ljmps\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.783902 4924 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.885229 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.885287 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.885310 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljmps\" (UniqueName: \"kubernetes.io/projected/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-kube-api-access-ljmps\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.885375 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: 
\"kubernetes.io/secret/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.891003 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.894056 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.894373 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" Dec 11 14:14:40 crc kubenswrapper[4924]: I1211 14:14:40.918707 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljmps\" (UniqueName: \"kubernetes.io/projected/63fa29c1-05fc-4660-a6d6-8c59e7fa0a63-kube-api-access-ljmps\") pod \"default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt\" (UID: \"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt"
Dec 11 14:14:41 crc kubenswrapper[4924]: I1211 14:14:41.089818 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt"
Dec 11 14:14:45 crc kubenswrapper[4924]: I1211 14:14:45.991383 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt"]
Dec 11 14:14:45 crc kubenswrapper[4924]: W1211 14:14:45.997346 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63fa29c1_05fc_4660_a6d6_8c59e7fa0a63.slice/crio-d6eea0baa5204b1bb5892a72de1f48287710bec8303cda8fa9328bef6900753d WatchSource:0}: Error finding container d6eea0baa5204b1bb5892a72de1f48287710bec8303cda8fa9328bef6900753d: Status 404 returned error can't find the container with id d6eea0baa5204b1bb5892a72de1f48287710bec8303cda8fa9328bef6900753d
Dec 11 14:14:46 crc kubenswrapper[4924]: I1211 14:14:46.130210 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh"]
Dec 11 14:14:46 crc kubenswrapper[4924]: W1211 14:14:46.137482 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod533c2af6_100b_49d2_b06f_8fd5c6754220.slice/crio-36e22e87af88788385910d87667f4492db7fe3714192f935dddf1dde5f5bb156 WatchSource:0}: Error finding container 36e22e87af88788385910d87667f4492db7fe3714192f935dddf1dde5f5bb156: Status 404 returned error can't find the container with id 36e22e87af88788385910d87667f4492db7fe3714192f935dddf1dde5f5bb156
Dec 11 14:14:46 crc kubenswrapper[4924]: I1211 14:14:46.418391 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerStarted","Data":"d6eea0baa5204b1bb5892a72de1f48287710bec8303cda8fa9328bef6900753d"}
Dec 11 14:14:46 crc kubenswrapper[4924]: I1211 14:14:46.419434 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerStarted","Data":"36e22e87af88788385910d87667f4492db7fe3714192f935dddf1dde5f5bb156"}
Dec 11 14:14:47 crc kubenswrapper[4924]: I1211 14:14:47.431635 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"932146805b23fd538d1dbb8718bb463853c292927c2c1f911d7002f1be1124fb"}
Dec 11 14:14:47 crc kubenswrapper[4924]: I1211 14:14:47.434362 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"49243203-c21f-40a7-b19b-da1632d7ede1","Type":"ContainerStarted","Data":"f02d0db7ce2c43845f4f4cedfbb284f7363060259a8d999a0d8bde6e665d886a"}
Dec 11 14:14:47 crc kubenswrapper[4924]: I1211 14:14:47.436491 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"81f18239f0f19324b394ef83ca02753436ed9e449cd3e0a81236c78eaf4ff979"}
Dec 11 14:14:47 crc kubenswrapper[4924]: I1211 14:14:47.438466 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"f3fd0eeba7173ae17f623709368c9442af0f0414a5439caae10e4de48bb30d0e"}
Dec 11 14:14:48 crc kubenswrapper[4924]: I1211 14:14:48.447031 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerStarted","Data":"b5192db521ce8fff79f96a90d44ad71cc72d85e86f838b2a68f839c4bc6de6d6"}
Dec 11 14:14:48 crc kubenswrapper[4924]: I1211 14:14:48.448447 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerStarted","Data":"ba00b9a9226b599d9040118b90703e81eaeaf85e8614ea47b8ec565d0fdea74a"}
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.135874 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=44.428105551 podStartE2EDuration="58.135853009s" podCreationTimestamp="2025-12-11 14:14:02 +0000 UTC" firstStartedPulling="2025-12-11 14:14:33.196270193 +0000 UTC m=+1286.705751170" lastFinishedPulling="2025-12-11 14:14:46.904017651 +0000 UTC m=+1300.413498628" observedRunningTime="2025-12-11 14:14:47.471595137 +0000 UTC m=+1300.981076114" watchObservedRunningTime="2025-12-11 14:15:00.135853009 +0000 UTC m=+1313.645333986"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.153639 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"]
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.154936 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"]
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.155149 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.197601 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.198056 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.301166 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.301223 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skpn4\" (UniqueName: \"kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.301269 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.402663 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.402719 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpn4\" (UniqueName: \"kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.402766 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.403675 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.408885 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.419014 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpn4\" (UniqueName: \"kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4\") pod \"collect-profiles-29424375-nvs4h\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.455013 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"]
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.455510 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" podUID="5b61d340-0e65-430b-a47b-d874700a8641" containerName="default-interconnect" containerID="cri-o://6d7208de1b24a25dfe545e9abfc24b74aa1ade3bf11ba375c9533a3d10d5bda6" gracePeriod=30
Dec 11 14:15:00 crc kubenswrapper[4924]: I1211 14:15:00.524911 4924 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.898823 4924 generic.go:334] "Generic (PLEG): container finished" podID="533c2af6-100b-49d2-b06f-8fd5c6754220" containerID="b5192db521ce8fff79f96a90d44ad71cc72d85e86f838b2a68f839c4bc6de6d6" exitCode=0
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.898957 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerDied","Data":"b5192db521ce8fff79f96a90d44ad71cc72d85e86f838b2a68f839c4bc6de6d6"}
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.900317 4924 generic.go:334] "Generic (PLEG): container finished" podID="5b61d340-0e65-430b-a47b-d874700a8641" containerID="6d7208de1b24a25dfe545e9abfc24b74aa1ade3bf11ba375c9533a3d10d5bda6" exitCode=0
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.900393 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" event={"ID":"5b61d340-0e65-430b-a47b-d874700a8641","Type":"ContainerDied","Data":"6d7208de1b24a25dfe545e9abfc24b74aa1ade3bf11ba375c9533a3d10d5bda6"}
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.901674 4924 generic.go:334] "Generic (PLEG): container finished" podID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63" containerID="ba00b9a9226b599d9040118b90703e81eaeaf85e8614ea47b8ec565d0fdea74a" exitCode=0
Dec 11 14:15:01 crc kubenswrapper[4924]: I1211 14:15:01.901711 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerDied","Data":"ba00b9a9226b599d9040118b90703e81eaeaf85e8614ea47b8ec565d0fdea74a"}
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.917523 4924 generic.go:334] "Generic (PLEG): container finished" podID="190d48f1-a870-406d-9bbc-b831a22ac215" containerID="932146805b23fd538d1dbb8718bb463853c292927c2c1f911d7002f1be1124fb" exitCode=0
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.917585 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerDied","Data":"932146805b23fd538d1dbb8718bb463853c292927c2c1f911d7002f1be1124fb"}
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.920144 4924 generic.go:334] "Generic (PLEG): container finished" podID="745d94ca-bcf1-48fc-b39b-0dbb960de581" containerID="81f18239f0f19324b394ef83ca02753436ed9e449cd3e0a81236c78eaf4ff979" exitCode=0
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.920201 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerDied","Data":"81f18239f0f19324b394ef83ca02753436ed9e449cd3e0a81236c78eaf4ff979"}
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.922202 4924 generic.go:334] "Generic (PLEG): container finished" podID="8f5e3b53-98c3-40f8-8376-d0f308b68f7b" containerID="f3fd0eeba7173ae17f623709368c9442af0f0414a5439caae10e4de48bb30d0e" exitCode=0
Dec 11 14:15:03 crc kubenswrapper[4924]: I1211 14:15:03.922229 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerDied","Data":"f3fd0eeba7173ae17f623709368c9442af0f0414a5439caae10e4de48bb30d0e"}
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.067978 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest"
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.068502 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config /etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-certs,ReadOnly:false,MountPath:/config/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmlgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh_service-telemetry(533c2af6-100b-49d2-b06f-8fd5c6754220): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.070967 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" podUID="533c2af6-100b-49d2-b06f-8fd5c6754220"
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.081556 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/sg-core:latest"
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.081717 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/infrawatch/sg-core:latest,Command:[],Args:[-config /etc/sg-core/sg-core.conf.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MY_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sg-core-config,ReadOnly:true,MountPath:/etc/sg-core/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-certs,ReadOnly:false,MountPath:/config/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljmps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt_service-telemetry(63fa29c1-05fc-4660-a6d6-8c59e7fa0a63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.082909 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" podUID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.500832 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h"]
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.807217 4924 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.844511 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qmgqf"]
Dec 11 14:15:05 crc kubenswrapper[4924]: E1211 14:15:05.844784 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b61d340-0e65-430b-a47b-d874700a8641" containerName="default-interconnect"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.844802 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b61d340-0e65-430b-a47b-d874700a8641" containerName="default-interconnect"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.844927 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b61d340-0e65-430b-a47b-d874700a8641" containerName="default-interconnect"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.845398 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.850740 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qmgqf"]
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.935946 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h" event={"ID":"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6","Type":"ContainerStarted","Data":"964b56f682c848f092928f13e49bf51bba25ed336fc079b9ac99c3e618b1f458"}
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.937509 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.937501 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-w8jbd" event={"ID":"5b61d340-0e65-430b-a47b-d874700a8641","Type":"ContainerDied","Data":"c26bcd7684d0ddc34e79ffb51802ce96701618689c27cda8afd9985366f980d8"}
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.937549 4924 scope.go:117] "RemoveContainer" containerID="6d7208de1b24a25dfe545e9abfc24b74aa1ade3bf11ba375c9533a3d10d5bda6"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.937825 4924 scope.go:117] "RemoveContainer" containerID="b5192db521ce8fff79f96a90d44ad71cc72d85e86f838b2a68f839c4bc6de6d6"
Dec 11 14:15:05 crc kubenswrapper[4924]: I1211 14:15:05.938712 4924 scope.go:117] "RemoveContainer" containerID="ba00b9a9226b599d9040118b90703e81eaeaf85e8614ea47b8ec565d0fdea74a"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.000056 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.001472 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.001574 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.001707 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.001821 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8hss\" (UniqueName: \"kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.001949 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002098 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config\") pod \"5b61d340-0e65-430b-a47b-d874700a8641\" (UID: \"5b61d340-0e65-430b-a47b-d874700a8641\") "
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002392 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002526 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002633 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002726 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002866 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-sasl-users\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.002972 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn7hd\" (UniqueName: \"kubernetes.io/projected/13310010-42ff-4473-b29e-413053a6a8f8-kube-api-access-hn7hd\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.003088 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/13310010-42ff-4473-b29e-413053a6a8f8-sasl-config\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.004763 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.011198 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "default-interconnect-inter-router-credentials".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.011545 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.014380 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.015057 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss" (OuterVolumeSpecName: "kube-api-access-z8hss") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "kube-api-access-z8hss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.015224 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.017173 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "5b61d340-0e65-430b-a47b-d874700a8641" (UID: "5b61d340-0e65-430b-a47b-d874700a8641"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104660 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104721 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104755 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104795 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-sasl-users\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104818 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn7hd\" (UniqueName: \"kubernetes.io/projected/13310010-42ff-4473-b29e-413053a6a8f8-kube-api-access-hn7hd\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104846 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/13310010-42ff-4473-b29e-413053a6a8f8-sasl-config\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104905 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104966 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8hss\" (UniqueName: \"kubernetes.io/projected/5b61d340-0e65-430b-a47b-d874700a8641-kube-api-access-z8hss\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104976 4924 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104987 4924 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b61d340-0e65-430b-a47b-d874700a8641-sasl-config\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.104998 4924 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.105007 4924 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.105018 4924 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-sasl-users\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.105027 4924 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b61d340-0e65-430b-a47b-d874700a8641-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.106486 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/13310010-42ff-4473-b29e-413053a6a8f8-sasl-config\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.108758 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.112277 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.118580 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.125408 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-sasl-users\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.125828 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn7hd\" (UniqueName: \"kubernetes.io/projected/13310010-42ff-4473-b29e-413053a6a8f8-kube-api-access-hn7hd\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.126707 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/13310010-42ff-4473-b29e-413053a6a8f8-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qmgqf\" (UID: \"13310010-42ff-4473-b29e-413053a6a8f8\") " pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.164789 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qmgqf"
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.275813 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"]
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.280837 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-w8jbd"]
Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.413457 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qmgqf"]
Dec 11 14:15:06 crc kubenswrapper[4924]: W1211 14:15:06.416808 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13310010_42ff_4473_b29e_413053a6a8f8.slice/crio-b3a0a3f448e4625f23411a502b71f67b7aac4edf72a5d96b76ed999561547633 WatchSource:0}: Error finding container b3a0a3f448e4625f23411a502b71f67b7aac4edf72a5d96b76ed999561547633: Status 404 returned error can't find the container with id b3a0a3f448e4625f23411a502b71f67b7aac4edf72a5d96b76ed999561547633
Dec 11
14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.802726 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b61d340-0e65-430b-a47b-d874700a8641" path="/var/lib/kubelet/pods/5b61d340-0e65-430b-a47b-d874700a8641/volumes" Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.947067 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"a6904b6891123d6cfeecd3b9778c6b4a892c0e268856cf9346919b1ae9c78589"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.947554 4924 scope.go:117] "RemoveContainer" containerID="932146805b23fd538d1dbb8718bb463853c292927c2c1f911d7002f1be1124fb" Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.948709 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qmgqf" event={"ID":"13310010-42ff-4473-b29e-413053a6a8f8","Type":"ContainerStarted","Data":"b207561130f15c13cf415bf405d7f4b4042f06e3be65572ca51f63f7326ba8ac"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.948925 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qmgqf" event={"ID":"13310010-42ff-4473-b29e-413053a6a8f8","Type":"ContainerStarted","Data":"b3a0a3f448e4625f23411a502b71f67b7aac4edf72a5d96b76ed999561547633"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.950017 4924 generic.go:334] "Generic (PLEG): container finished" podID="eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" containerID="bde1a25a75197dbd032a337abd6b78c90b02ea5f8d7e6a428e19e4fa4fd6279d" exitCode=0 Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.950113 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h" 
event={"ID":"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6","Type":"ContainerDied","Data":"bde1a25a75197dbd032a337abd6b78c90b02ea5f8d7e6a428e19e4fa4fd6279d"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.952640 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"959d3f1c9212ba262664bde2f3533d4517f9b9e383f42061608a0f8564484b90"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.952861 4924 scope.go:117] "RemoveContainer" containerID="81f18239f0f19324b394ef83ca02753436ed9e449cd3e0a81236c78eaf4ff979" Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.954724 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"928f58d048e9abb3f7ce367a06d574e84ed00eab5c30368040b1d59ffd17d0bc"} Dec 11 14:15:06 crc kubenswrapper[4924]: I1211 14:15:06.955214 4924 scope.go:117] "RemoveContainer" containerID="f3fd0eeba7173ae17f623709368c9442af0f0414a5439caae10e4de48bb30d0e" Dec 11 14:15:07 crc kubenswrapper[4924]: I1211 14:15:07.033372 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-qmgqf" podStartSLOduration=7.033349309 podStartE2EDuration="7.033349309s" podCreationTimestamp="2025-12-11 14:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:15:07.029387905 +0000 UTC m=+1320.538868892" watchObservedRunningTime="2025-12-11 14:15:07.033349309 +0000 UTC m=+1320.542830286" Dec 11 14:15:07 crc kubenswrapper[4924]: E1211 14:15:07.529453 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" podUID="533c2af6-100b-49d2-b06f-8fd5c6754220" Dec 11 14:15:07 crc kubenswrapper[4924]: E1211 14:15:07.531096 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" podUID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63" Dec 11 14:15:07 crc kubenswrapper[4924]: I1211 14:15:07.963522 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerStarted","Data":"135c047ba1fa8cf37a7ead20d48ad6a613ac0079196606252364c9e25a679062"} Dec 11 14:15:07 crc kubenswrapper[4924]: E1211 14:15:07.967477 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" podUID="533c2af6-100b-49d2-b06f-8fd5c6754220" Dec 11 14:15:07 crc kubenswrapper[4924]: I1211 14:15:07.969066 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"72155f960c1309c567f35b6328d0647626915f9d0eded09f0eb639f7846d619a"} Dec 11 14:15:07 crc kubenswrapper[4924]: I1211 14:15:07.977201 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" 
event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851"} Dec 11 14:15:07 crc kubenswrapper[4924]: I1211 14:15:07.979457 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerStarted","Data":"4d3e1c27f759e7a5f35b8bb9de24808375ffb18bd5f2d44163e786f244e27fbd"} Dec 11 14:15:07 crc kubenswrapper[4924]: E1211 14:15:07.983090 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" podUID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.009537 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" podStartSLOduration=6.3915695249999995 podStartE2EDuration="43.009519946s" podCreationTimestamp="2025-12-11 14:14:25 +0000 UTC" firstStartedPulling="2025-12-11 14:14:30.911847637 +0000 UTC m=+1284.421328614" lastFinishedPulling="2025-12-11 14:15:07.529798058 +0000 UTC m=+1321.039279035" observedRunningTime="2025-12-11 14:15:07.997587223 +0000 UTC m=+1321.507068200" watchObservedRunningTime="2025-12-11 14:15:08.009519946 +0000 UTC m=+1321.519000923" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.028424 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" podStartSLOduration=3.74367611 podStartE2EDuration="38.0284055s" podCreationTimestamp="2025-12-11 14:14:30 +0000 UTC" firstStartedPulling="2025-12-11 14:14:33.481986798 +0000 UTC m=+1286.991467765" 
lastFinishedPulling="2025-12-11 14:15:07.766716178 +0000 UTC m=+1321.276197155" observedRunningTime="2025-12-11 14:15:08.018430753 +0000 UTC m=+1321.527911730" watchObservedRunningTime="2025-12-11 14:15:08.0284055 +0000 UTC m=+1321.537886477" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.293601 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.447194 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skpn4\" (UniqueName: \"kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4\") pod \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.447281 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume\") pod \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.447457 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume\") pod \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\" (UID: \"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6\") " Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.448229 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume" (OuterVolumeSpecName: "config-volume") pod "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" (UID: "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.455627 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" (UID: "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.457519 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4" (OuterVolumeSpecName: "kube-api-access-skpn4") pod "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" (UID: "eeddf69f-0c5b-4516-b6cb-0d26f32d14c6"). InnerVolumeSpecName "kube-api-access-skpn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.549415 4924 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.549482 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skpn4\" (UniqueName: \"kubernetes.io/projected/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-kube-api-access-skpn4\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.549495 4924 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddf69f-0c5b-4516-b6cb-0d26f32d14c6-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.994420 4924 generic.go:334] "Generic (PLEG): container finished" podID="533c2af6-100b-49d2-b06f-8fd5c6754220" 
containerID="135c047ba1fa8cf37a7ead20d48ad6a613ac0079196606252364c9e25a679062" exitCode=0 Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.994497 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerDied","Data":"135c047ba1fa8cf37a7ead20d48ad6a613ac0079196606252364c9e25a679062"} Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.994835 4924 scope.go:117] "RemoveContainer" containerID="b5192db521ce8fff79f96a90d44ad71cc72d85e86f838b2a68f839c4bc6de6d6" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.995416 4924 scope.go:117] "RemoveContainer" containerID="135c047ba1fa8cf37a7ead20d48ad6a613ac0079196606252364c9e25a679062" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.997431 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h" event={"ID":"eeddf69f-0c5b-4516-b6cb-0d26f32d14c6","Type":"ContainerDied","Data":"964b56f682c848f092928f13e49bf51bba25ed336fc079b9ac99c3e618b1f458"} Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.997454 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-nvs4h" Dec 11 14:15:08 crc kubenswrapper[4924]: I1211 14:15:08.997466 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="964b56f682c848f092928f13e49bf51bba25ed336fc079b9ac99c3e618b1f458" Dec 11 14:15:09 crc kubenswrapper[4924]: E1211 14:15:09.000572 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh_service-telemetry(533c2af6-100b-49d2-b06f-8fd5c6754220)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" podUID="533c2af6-100b-49d2-b06f-8fd5c6754220" Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.011148 4924 generic.go:334] "Generic (PLEG): container finished" podID="8f5e3b53-98c3-40f8-8376-d0f308b68f7b" containerID="5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851" exitCode=0 Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.011478 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerDied","Data":"5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851"} Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.012847 4924 scope.go:117] "RemoveContainer" containerID="5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851" Dec 11 14:15:09 crc kubenswrapper[4924]: E1211 14:15:09.013589 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl_service-telemetry(8f5e3b53-98c3-40f8-8376-d0f308b68f7b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" podUID="8f5e3b53-98c3-40f8-8376-d0f308b68f7b" Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.025007 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc"} Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.030311 4924 generic.go:334] "Generic (PLEG): container finished" podID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63" containerID="4d3e1c27f759e7a5f35b8bb9de24808375ffb18bd5f2d44163e786f244e27fbd" exitCode=0 Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.030419 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerDied","Data":"4d3e1c27f759e7a5f35b8bb9de24808375ffb18bd5f2d44163e786f244e27fbd"} Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.031572 4924 scope.go:117] "RemoveContainer" containerID="4d3e1c27f759e7a5f35b8bb9de24808375ffb18bd5f2d44163e786f244e27fbd" Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.032311 4924 scope.go:117] "RemoveContainer" containerID="f3fd0eeba7173ae17f623709368c9442af0f0414a5439caae10e4de48bb30d0e" Dec 11 14:15:09 crc kubenswrapper[4924]: E1211 14:15:09.033677 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt_service-telemetry(63fa29c1-05fc-4660-a6d6-8c59e7fa0a63)\", failed to \"StartContainer\" for \"sg-core\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/infrawatch/sg-core:latest\\\"\"]" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" podUID="63fa29c1-05fc-4660-a6d6-8c59e7fa0a63" Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.062604 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" podStartSLOduration=8.741652771 podStartE2EDuration="43.062584537s" podCreationTimestamp="2025-12-11 14:14:26 +0000 UTC" firstStartedPulling="2025-12-11 14:14:33.549537108 +0000 UTC m=+1287.059018085" lastFinishedPulling="2025-12-11 14:15:07.870468874 +0000 UTC m=+1321.379949851" observedRunningTime="2025-12-11 14:15:09.062159604 +0000 UTC m=+1322.571640581" watchObservedRunningTime="2025-12-11 14:15:09.062584537 +0000 UTC m=+1322.572065514" Dec 11 14:15:09 crc kubenswrapper[4924]: I1211 14:15:09.092643 4924 scope.go:117] "RemoveContainer" containerID="ba00b9a9226b599d9040118b90703e81eaeaf85e8614ea47b8ec565d0fdea74a" Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.041034 4924 generic.go:334] "Generic (PLEG): container finished" podID="745d94ca-bcf1-48fc-b39b-0dbb960de581" containerID="72155f960c1309c567f35b6328d0647626915f9d0eded09f0eb639f7846d619a" exitCode=0 Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.041105 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerDied","Data":"72155f960c1309c567f35b6328d0647626915f9d0eded09f0eb639f7846d619a"} Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.041136 4924 scope.go:117] "RemoveContainer" containerID="81f18239f0f19324b394ef83ca02753436ed9e449cd3e0a81236c78eaf4ff979" Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.041843 4924 scope.go:117] "RemoveContainer" containerID="72155f960c1309c567f35b6328d0647626915f9d0eded09f0eb639f7846d619a" Dec 11 
14:15:10 crc kubenswrapper[4924]: E1211 14:15:10.042229 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s_service-telemetry(745d94ca-bcf1-48fc-b39b-0dbb960de581)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" podUID="745d94ca-bcf1-48fc-b39b-0dbb960de581" Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.045614 4924 scope.go:117] "RemoveContainer" containerID="5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851" Dec 11 14:15:10 crc kubenswrapper[4924]: E1211 14:15:10.045831 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl_service-telemetry(8f5e3b53-98c3-40f8-8376-d0f308b68f7b)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" podUID="8f5e3b53-98c3-40f8-8376-d0f308b68f7b" Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.048442 4924 generic.go:334] "Generic (PLEG): container finished" podID="190d48f1-a870-406d-9bbc-b831a22ac215" containerID="47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc" exitCode=0 Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.048543 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerDied","Data":"47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc"} Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.051303 4924 scope.go:117] "RemoveContainer" containerID="47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc" Dec 11 14:15:10 crc kubenswrapper[4924]: E1211 14:15:10.051593 4924 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z_service-telemetry(190d48f1-a870-406d-9bbc-b831a22ac215)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" podUID="190d48f1-a870-406d-9bbc-b831a22ac215" Dec 11 14:15:10 crc kubenswrapper[4924]: I1211 14:15:10.092870 4924 scope.go:117] "RemoveContainer" containerID="932146805b23fd538d1dbb8718bb463853c292927c2c1f911d7002f1be1124fb" Dec 11 14:15:11 crc kubenswrapper[4924]: I1211 14:15:11.060673 4924 scope.go:117] "RemoveContainer" containerID="47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc" Dec 11 14:15:11 crc kubenswrapper[4924]: E1211 14:15:11.061177 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z_service-telemetry(190d48f1-a870-406d-9bbc-b831a22ac215)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" podUID="190d48f1-a870-406d-9bbc-b831a22ac215" Dec 11 14:15:15 crc kubenswrapper[4924]: I1211 14:15:15.433601 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:15:15 crc kubenswrapper[4924]: I1211 14:15:15.434167 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 11 14:15:21 crc kubenswrapper[4924]: I1211 14:15:21.783615 4924 scope.go:117] "RemoveContainer" containerID="5c9ca2e79558716d5db423556d65349b73f770bd3b3531b30dbf55a189d42851" Dec 11 14:15:21 crc kubenswrapper[4924]: I1211 14:15:21.784759 4924 scope.go:117] "RemoveContainer" containerID="4d3e1c27f759e7a5f35b8bb9de24808375ffb18bd5f2d44163e786f244e27fbd" Dec 11 14:15:22 crc kubenswrapper[4924]: I1211 14:15:22.783361 4924 scope.go:117] "RemoveContainer" containerID="47f1b66923dcb3a4ace76dafc3fa15273517ecd760dfce41d4038ad1250e2adc" Dec 11 14:15:22 crc kubenswrapper[4924]: I1211 14:15:22.783422 4924 scope.go:117] "RemoveContainer" containerID="135c047ba1fa8cf37a7ead20d48ad6a613ac0079196606252364c9e25a679062" Dec 11 14:15:23 crc kubenswrapper[4924]: I1211 14:15:23.785134 4924 scope.go:117] "RemoveContainer" containerID="72155f960c1309c567f35b6328d0647626915f9d0eded09f0eb639f7846d619a" Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.266227 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerStarted","Data":"2dab662028a9a83046c42ae391206eb995a32c5c3c244d19ea6ad33b221ef3dc"} Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.268774 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s" event={"ID":"745d94ca-bcf1-48fc-b39b-0dbb960de581","Type":"ContainerStarted","Data":"14b1c8370ab8fa34ba1c35197e88fda416023d6c870a4e4fa2b67b4e46328d3b"} Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.272081 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl" event={"ID":"8f5e3b53-98c3-40f8-8376-d0f308b68f7b","Type":"ContainerStarted","Data":"230445611e910a78e7583e7b07ba71cc9cf061ea8164c37e6690082f6bf08fe2"} Dec 11 14:15:25 crc kubenswrapper[4924]: 
I1211 14:15:25.274918 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z" event={"ID":"190d48f1-a870-406d-9bbc-b831a22ac215","Type":"ContainerStarted","Data":"8ddbe58571e689ebfb63886605f7ec79a390641d5bf9de1fb516beb6353fc2a4"} Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.278110 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerStarted","Data":"aa99c0b2f423a9786fe9703d4002c1a424a4f0ed680b4700d5fb3d8922c23bd6"} Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.278198 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" event={"ID":"63fa29c1-05fc-4660-a6d6-8c59e7fa0a63","Type":"ContainerStarted","Data":"4e7d424029b894e3e96bc8466c53c3339cc5682ff1fb8c5c93809665cca27f77"} Dec 11 14:15:25 crc kubenswrapper[4924]: I1211 14:15:25.325117 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt" podStartSLOduration=7.479570717 podStartE2EDuration="45.325097459s" podCreationTimestamp="2025-12-11 14:14:40 +0000 UTC" firstStartedPulling="2025-12-11 14:14:46.903226398 +0000 UTC m=+1300.412707365" lastFinishedPulling="2025-12-11 14:15:24.74875313 +0000 UTC m=+1338.258234107" observedRunningTime="2025-12-11 14:15:25.311398864 +0000 UTC m=+1338.820879871" watchObservedRunningTime="2025-12-11 14:15:25.325097459 +0000 UTC m=+1338.834578446" Dec 11 14:15:26 crc kubenswrapper[4924]: I1211 14:15:26.287291 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" 
event={"ID":"533c2af6-100b-49d2-b06f-8fd5c6754220","Type":"ContainerStarted","Data":"414b443cb6aa376ca7442e0b877a1a41eb8195ac25aa9e26b19964ebee8a3d3b"} Dec 11 14:15:26 crc kubenswrapper[4924]: I1211 14:15:26.306896 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh" podStartSLOduration=8.464432811 podStartE2EDuration="47.306873337s" podCreationTimestamp="2025-12-11 14:14:39 +0000 UTC" firstStartedPulling="2025-12-11 14:14:46.903229928 +0000 UTC m=+1300.412710915" lastFinishedPulling="2025-12-11 14:15:25.745670464 +0000 UTC m=+1339.255151441" observedRunningTime="2025-12-11 14:15:26.302309276 +0000 UTC m=+1339.811790253" watchObservedRunningTime="2025-12-11 14:15:26.306873337 +0000 UTC m=+1339.816354324" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.778381 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 11 14:15:37 crc kubenswrapper[4924]: E1211 14:15:37.779214 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" containerName="collect-profiles" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.779232 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" containerName="collect-profiles" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.779456 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeddf69f-0c5b-4516-b6cb-0d26f32d14c6" containerName="collect-profiles" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.780077 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.786871 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.884402 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/f0ce9380-943d-420b-a359-d83b576b9272-qdr-test-config\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.884475 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnsw\" (UniqueName: \"kubernetes.io/projected/f0ce9380-943d-420b-a359-d83b576b9272-kube-api-access-sxnsw\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.884537 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f0ce9380-943d-420b-a359-d83b576b9272-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.887252 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.887626 4924 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.985782 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/f0ce9380-943d-420b-a359-d83b576b9272-qdr-test-config\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.986148 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnsw\" (UniqueName: \"kubernetes.io/projected/f0ce9380-943d-420b-a359-d83b576b9272-kube-api-access-sxnsw\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.986522 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f0ce9380-943d-420b-a359-d83b576b9272-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.986881 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/f0ce9380-943d-420b-a359-d83b576b9272-qdr-test-config\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:37 crc kubenswrapper[4924]: I1211 14:15:37.993025 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/f0ce9380-943d-420b-a359-d83b576b9272-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:38 crc kubenswrapper[4924]: I1211 14:15:38.009096 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnsw\" (UniqueName: \"kubernetes.io/projected/f0ce9380-943d-420b-a359-d83b576b9272-kube-api-access-sxnsw\") pod \"qdr-test\" (UID: 
\"f0ce9380-943d-420b-a359-d83b576b9272\") " pod="service-telemetry/qdr-test" Dec 11 14:15:38 crc kubenswrapper[4924]: I1211 14:15:38.200112 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 11 14:15:38 crc kubenswrapper[4924]: I1211 14:15:38.596200 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 11 14:15:39 crc kubenswrapper[4924]: I1211 14:15:39.377023 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"f0ce9380-943d-420b-a359-d83b576b9272","Type":"ContainerStarted","Data":"04a0b6a8bde92b662be99fc9958ca48bfe0cdd0d819197be93fcdcc48ac64703"} Dec 11 14:15:45 crc kubenswrapper[4924]: I1211 14:15:45.433208 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:15:45 crc kubenswrapper[4924]: I1211 14:15:45.433799 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.431805 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"f0ce9380-943d-420b-a359-d83b576b9272","Type":"ContainerStarted","Data":"c9cc45596ab932029a3cb2c3d76b0d1440bfb3bac2431a938ef5a30571802966"} Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.453617 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.572593444 podStartE2EDuration="10.453595011s" 
podCreationTimestamp="2025-12-11 14:15:37 +0000 UTC" firstStartedPulling="2025-12-11 14:15:38.599425073 +0000 UTC m=+1352.108906050" lastFinishedPulling="2025-12-11 14:15:46.48042664 +0000 UTC m=+1359.989907617" observedRunningTime="2025-12-11 14:15:47.446717363 +0000 UTC m=+1360.956198340" watchObservedRunningTime="2025-12-11 14:15:47.453595011 +0000 UTC m=+1360.963075988" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.734753 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z85gp"] Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.736948 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.740132 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.740295 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.740429 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.740575 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.742384 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.742493 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.745692 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/stf-smoketest-smoke1-z85gp"] Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944234 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944455 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944553 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944590 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944622 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944740 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:47 crc kubenswrapper[4924]: I1211 14:15:47.944782 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046745 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046829 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046861 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046886 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046924 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.046947 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.047007 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.048751 4924 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.048784 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.048751 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.049360 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.049401 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.049680 4924 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.071807 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj\") pod \"stf-smoketest-smoke1-z85gp\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.140810 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.142056 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.146477 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.250508 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndj6v\" (UniqueName: \"kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v\") pod \"curl\" (UID: \"4a822bd3-d648-4e87-a903-2e7bf433fe18\") " pod="service-telemetry/curl" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.352368 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndj6v\" (UniqueName: \"kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v\") pod \"curl\" (UID: \"4a822bd3-d648-4e87-a903-2e7bf433fe18\") " pod="service-telemetry/curl" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.355961 4924 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.369883 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndj6v\" (UniqueName: \"kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v\") pod \"curl\" (UID: \"4a822bd3-d648-4e87-a903-2e7bf433fe18\") " pod="service-telemetry/curl" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.509599 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.717001 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 11 14:15:48 crc kubenswrapper[4924]: W1211 14:15:48.719524 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a822bd3_d648_4e87_a903_2e7bf433fe18.slice/crio-c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b WatchSource:0}: Error finding container c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b: Status 404 returned error can't find the container with id c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b Dec 11 14:15:48 crc kubenswrapper[4924]: I1211 14:15:48.872773 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z85gp"] Dec 11 14:15:48 crc kubenswrapper[4924]: W1211 14:15:48.875476 4924 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod165670d8_6a83_4fc1_b7f2_665d218de570.slice/crio-e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a WatchSource:0}: Error finding container e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a: Status 404 returned error can't find the container with id 
e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a Dec 11 14:15:49 crc kubenswrapper[4924]: I1211 14:15:49.530248 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerStarted","Data":"e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a"} Dec 11 14:15:49 crc kubenswrapper[4924]: I1211 14:15:49.531764 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"4a822bd3-d648-4e87-a903-2e7bf433fe18","Type":"ContainerStarted","Data":"c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b"} Dec 11 14:15:53 crc kubenswrapper[4924]: I1211 14:15:53.572201 4924 generic.go:334] "Generic (PLEG): container finished" podID="4a822bd3-d648-4e87-a903-2e7bf433fe18" containerID="deb49e544d5d7591b595368a6e7f504ab9b0a0ab0b3268e019950e3a33adff88" exitCode=0 Dec 11 14:15:53 crc kubenswrapper[4924]: I1211 14:15:53.572300 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"4a822bd3-d648-4e87-a903-2e7bf433fe18","Type":"ContainerDied","Data":"deb49e544d5d7591b595368a6e7f504ab9b0a0ab0b3268e019950e3a33adff88"} Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.461108 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.604384 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndj6v\" (UniqueName: \"kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v\") pod \"4a822bd3-d648-4e87-a903-2e7bf433fe18\" (UID: \"4a822bd3-d648-4e87-a903-2e7bf433fe18\") " Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.612428 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v" (OuterVolumeSpecName: "kube-api-access-ndj6v") pod "4a822bd3-d648-4e87-a903-2e7bf433fe18" (UID: "4a822bd3-d648-4e87-a903-2e7bf433fe18"). InnerVolumeSpecName "kube-api-access-ndj6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.624157 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_4a822bd3-d648-4e87-a903-2e7bf433fe18/curl/0.log" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.671861 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"4a822bd3-d648-4e87-a903-2e7bf433fe18","Type":"ContainerDied","Data":"c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b"} Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.671902 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c884fa5f5908b5fe8bbbeec8102c36ed26f838720e799eb4dabb731d5c93d10b" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.671932 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.706398 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndj6v\" (UniqueName: \"kubernetes.io/projected/4a822bd3-d648-4e87-a903-2e7bf433fe18-kube-api-access-ndj6v\") on node \"crc\" DevicePath \"\"" Dec 11 14:16:04 crc kubenswrapper[4924]: I1211 14:16:04.876709 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-9nvf4_4d1f4a5b-ce7c-4386-ba50-c36ff3de3686/prometheus-webhook-snmp/0.log" Dec 11 14:16:05 crc kubenswrapper[4924]: E1211 14:16:05.163647 4924 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo" Dec 11 14:16:05 crc kubenswrapper[4924]: E1211 14:16:05.163806 4924 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:YysCvK0vlF8VdMY4tYnNEqSa,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc2NTQ2NjEzMywiaWF0IjoxNzY1NDYyNTMzLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIzM2ZhNGNmYy0wZTk2LTRmZjEtYmI2ZC0zNGIxMjVkMGM5MWMiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6ImQ3OTE4YmVhLTE3NjItNGJhMS1iNTMzLTczY2ZkMGU5ZmQ4MyJ9fSwibmJmIjoxNzY1NDYyNTMzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.DSkzK9uEbqtZ3Bi1vnA3S_Mok8
6P5tuCcDCu1JtWcV116wsVbkZFRKlDjAQOObenUzU6-C7itkenxjmIELB19aQBpZb32HlcbSzV0HnFaVbI6t40uWtvjsq953Nrc9Cine7WdT0tWbXFO3QfZaiovOx5y1jabxknJSwaB66v5eVRuKOnVhc4ohvRfgbBSpJJWRVDuvz3zlDI3SgewG78LnmBjEwa3LtMcA9IXkBAmXPEBFc3zz8nLCghpVmj3ul6Rde1pXltSI7_GXAGIJ8nKjZsG3oauLIhB--TC8BhCpdBCB_3zYyjrIOhj69kAmiviVY88WBA92rVzR021m3PvUqv1S07zbvxFaAuOTprrHkBHwYkSThplxw6sqmdp3ZwQHXzDA1kRc8fbhHRmDG7rnxHk6A43iFH86WTPfrYZaD_GpeebsomCzDyw83sVl5nRz_zfwF5CLXaH7rtuKEqMC7Muk3rw5EKsagmaikQ8KO_xaMt3kS7VNM0oqxHe_8p3awkGwGkTVTih0JfBBkA6-E1GJA_15-tbgiv0soj3MgunofSFX5ujrf6P3lNSMpKlBckIkcCQlvHtFO8cLBZYzt-fY4z5Dmp4KVj2MOpRhQtrnWSvyMdjLmFrXZyVxOfI10LmnOoyOA8hM2xVzjOI8SwQAP4WUW1ZnuBN_H5VUI,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-collectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cgxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SE
LinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-z85gp_service-telemetry(165670d8-6a83-4fc1-b7f2-665d218de570): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:16:13 crc kubenswrapper[4924]: E1211 14:16:13.647959 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/stf-smoketest-smoke1-z85gp" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" Dec 11 14:16:13 crc kubenswrapper[4924]: I1211 14:16:13.733394 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerStarted","Data":"b2324974dd05b6ee1e7243426a1f3ae5c0bd35c89933feffe197f58a9e0bae4b"} Dec 11 14:16:13 crc kubenswrapper[4924]: E1211 14:16:13.735273 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-z85gp" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" Dec 11 14:16:14 crc kubenswrapper[4924]: E1211 14:16:14.740967 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-z85gp" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.434446 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.434527 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.434583 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.435359 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.435438 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752" gracePeriod=600 Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 
14:16:15.748539 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752" exitCode=0 Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.749604 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752"} Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.749782 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerStarted","Data":"bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4"} Dec 11 14:16:15 crc kubenswrapper[4924]: I1211 14:16:15.749833 4924 scope.go:117] "RemoveContainer" containerID="93e4fd4fa7a0ea185c1b0a02c76e4148f87fb1524a936a5e232d6e0e38f7bfdc" Dec 11 14:16:27 crc kubenswrapper[4924]: I1211 14:16:27.878760 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerStarted","Data":"3eda26bfe34602deb85bcdd1c34502fffbe9f5821e23492b841b50075d787d67"} Dec 11 14:16:27 crc kubenswrapper[4924]: I1211 14:16:27.907389 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-z85gp" podStartSLOduration=2.368288881 podStartE2EDuration="40.907353398s" podCreationTimestamp="2025-12-11 14:15:47 +0000 UTC" firstStartedPulling="2025-12-11 14:15:48.878745061 +0000 UTC m=+1362.388226038" lastFinishedPulling="2025-12-11 14:16:27.417809578 +0000 UTC m=+1400.927290555" observedRunningTime="2025-12-11 14:16:27.901189561 +0000 UTC m=+1401.410670538" watchObservedRunningTime="2025-12-11 14:16:27.907353398 +0000 UTC 
m=+1401.416834365" Dec 11 14:16:35 crc kubenswrapper[4924]: I1211 14:16:35.015049 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-9nvf4_4d1f4a5b-ce7c-4386-ba50-c36ff3de3686/prometheus-webhook-snmp/0.log" Dec 11 14:16:46 crc kubenswrapper[4924]: I1211 14:16:46.017736 4924 generic.go:334] "Generic (PLEG): container finished" podID="165670d8-6a83-4fc1-b7f2-665d218de570" containerID="b2324974dd05b6ee1e7243426a1f3ae5c0bd35c89933feffe197f58a9e0bae4b" exitCode=0 Dec 11 14:16:46 crc kubenswrapper[4924]: I1211 14:16:46.017832 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerDied","Data":"b2324974dd05b6ee1e7243426a1f3ae5c0bd35c89933feffe197f58a9e0bae4b"} Dec 11 14:16:46 crc kubenswrapper[4924]: I1211 14:16:46.018990 4924 scope.go:117] "RemoveContainer" containerID="b2324974dd05b6ee1e7243426a1f3ae5c0bd35c89933feffe197f58a9e0bae4b" Dec 11 14:17:01 crc kubenswrapper[4924]: I1211 14:17:01.140548 4924 generic.go:334] "Generic (PLEG): container finished" podID="165670d8-6a83-4fc1-b7f2-665d218de570" containerID="3eda26bfe34602deb85bcdd1c34502fffbe9f5821e23492b841b50075d787d67" exitCode=0 Dec 11 14:17:01 crc kubenswrapper[4924]: I1211 14:17:01.140623 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerDied","Data":"3eda26bfe34602deb85bcdd1c34502fffbe9f5821e23492b841b50075d787d67"} Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.387235 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.415214 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.415287 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.423501 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj" (OuterVolumeSpecName: "kube-api-access-cgxdj") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "kube-api-access-cgxdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.433128 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.516418 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.516475 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.516531 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.516566 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.516618 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log\") pod \"165670d8-6a83-4fc1-b7f2-665d218de570\" (UID: \"165670d8-6a83-4fc1-b7f2-665d218de570\") " Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.517006 4924 reconciler_common.go:293] 
"Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.517028 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgxdj\" (UniqueName: \"kubernetes.io/projected/165670d8-6a83-4fc1-b7f2-665d218de570-kube-api-access-cgxdj\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.535218 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.537460 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.543749 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.551878 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.552538 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "165670d8-6a83-4fc1-b7f2-665d218de570" (UID: "165670d8-6a83-4fc1-b7f2-665d218de570"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.617836 4924 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.617885 4924 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.617895 4924 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.617905 4924 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:02 crc kubenswrapper[4924]: I1211 14:17:02.617914 4924 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/165670d8-6a83-4fc1-b7f2-665d218de570-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 11 14:17:03 crc kubenswrapper[4924]: I1211 14:17:03.158904 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z85gp" event={"ID":"165670d8-6a83-4fc1-b7f2-665d218de570","Type":"ContainerDied","Data":"e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a"} Dec 11 14:17:03 crc kubenswrapper[4924]: I1211 14:17:03.158960 4924 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3555a94a3463b0b0f738e81d534ced097aef150ffb9e42f4c5d62dc1935df3a" Dec 11 14:17:03 crc kubenswrapper[4924]: I1211 14:17:03.158974 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z85gp" Dec 11 14:17:04 crc kubenswrapper[4924]: I1211 14:17:04.311609 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z85gp_165670d8-6a83-4fc1-b7f2-665d218de570/smoketest-collectd/0.log" Dec 11 14:17:04 crc kubenswrapper[4924]: I1211 14:17:04.586585 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z85gp_165670d8-6a83-4fc1-b7f2-665d218de570/smoketest-ceilometer/0.log" Dec 11 14:17:04 crc kubenswrapper[4924]: I1211 14:17:04.844122 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-qmgqf_13310010-42ff-4473-b29e-413053a6a8f8/default-interconnect/0.log" Dec 11 14:17:05 crc kubenswrapper[4924]: I1211 14:17:05.076634 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl_8f5e3b53-98c3-40f8-8376-d0f308b68f7b/bridge/2.log" Dec 11 14:17:05 crc kubenswrapper[4924]: I1211 14:17:05.334188 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7996dc9458-57pxl_8f5e3b53-98c3-40f8-8376-d0f308b68f7b/sg-core/0.log" Dec 11 14:17:05 crc kubenswrapper[4924]: I1211 14:17:05.592502 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh_533c2af6-100b-49d2-b06f-8fd5c6754220/bridge/2.log" Dec 11 14:17:06 crc kubenswrapper[4924]: I1211 14:17:06.340222 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-65db5bcd5f-8b8xh_533c2af6-100b-49d2-b06f-8fd5c6754220/sg-core/0.log" Dec 11 14:17:06 crc kubenswrapper[4924]: I1211 14:17:06.594931 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z_190d48f1-a870-406d-9bbc-b831a22ac215/bridge/2.log" Dec 11 14:17:06 crc kubenswrapper[4924]: I1211 14:17:06.834837 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-b57f974ff-mwm4z_190d48f1-a870-406d-9bbc-b831a22ac215/sg-core/0.log" Dec 11 14:17:07 crc kubenswrapper[4924]: I1211 14:17:07.070717 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt_63fa29c1-05fc-4660-a6d6-8c59e7fa0a63/bridge/2.log" Dec 11 14:17:07 crc kubenswrapper[4924]: I1211 14:17:07.321175 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-74554ff6fd-l7mmt_63fa29c1-05fc-4660-a6d6-8c59e7fa0a63/sg-core/0.log" Dec 11 14:17:07 crc kubenswrapper[4924]: I1211 14:17:07.576060 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s_745d94ca-bcf1-48fc-b39b-0dbb960de581/bridge/2.log" Dec 11 14:17:07 crc kubenswrapper[4924]: I1211 14:17:07.821355 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-6864f4fb65-t248s_745d94ca-bcf1-48fc-b39b-0dbb960de581/sg-core/0.log" Dec 11 14:17:10 crc kubenswrapper[4924]: I1211 14:17:10.405373 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6467cb9984-k9ft9_7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff/operator/0.log" Dec 11 14:17:10 crc kubenswrapper[4924]: I1211 14:17:10.655989 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_b503f5e8-7bbc-48a3-aed9-83ebaebbab33/prometheus/0.log" Dec 11 14:17:10 crc kubenswrapper[4924]: I1211 14:17:10.900660 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c00b0adf-9f8a-44cd-9ca7-381849854fdc/elasticsearch/0.log" Dec 11 14:17:11 crc kubenswrapper[4924]: I1211 14:17:11.132040 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-78bcbbdcff-9nvf4_4d1f4a5b-ce7c-4386-ba50-c36ff3de3686/prometheus-webhook-snmp/0.log" Dec 11 14:17:11 crc kubenswrapper[4924]: I1211 14:17:11.401039 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_49243203-c21f-40a7-b19b-da1632d7ede1/alertmanager/0.log" Dec 11 14:17:25 crc kubenswrapper[4924]: I1211 14:17:25.477724 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-75bb6547f5-rjpvz_5ea2b316-8ef3-4af3-9c0f-92064f0934d2/operator/0.log" Dec 11 14:17:28 crc kubenswrapper[4924]: I1211 14:17:28.011022 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-6467cb9984-k9ft9_7740a3d7-bc5a-4c22-9437-2aeac4fbf4ff/operator/0.log" Dec 11 14:17:28 crc kubenswrapper[4924]: I1211 14:17:28.357739 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_f0ce9380-943d-420b-a359-d83b576b9272/qdr/0.log" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.888449 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:17:56 crc kubenswrapper[4924]: E1211 14:17:56.889301 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a822bd3-d648-4e87-a903-2e7bf433fe18" containerName="curl" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889319 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a822bd3-d648-4e87-a903-2e7bf433fe18" containerName="curl" Dec 11 14:17:56 crc kubenswrapper[4924]: E1211 14:17:56.889377 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" 
containerName="smoketest-collectd" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889388 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" containerName="smoketest-collectd" Dec 11 14:17:56 crc kubenswrapper[4924]: E1211 14:17:56.889399 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" containerName="smoketest-ceilometer" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889408 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" containerName="smoketest-ceilometer" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889553 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" containerName="smoketest-collectd" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889569 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a822bd3-d648-4e87-a903-2e7bf433fe18" containerName="curl" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.889584 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="165670d8-6a83-4fc1-b7f2-665d218de570" containerName="smoketest-ceilometer" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.890134 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.900501 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:17:56 crc kubenswrapper[4924]: I1211 14:17:56.972621 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p58d\" (UniqueName: \"kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d\") pod \"infrawatch-operators-rtpcv\" (UID: \"1222ec9c-52e5-475b-ad62-77060e1d8e88\") " pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:17:57 crc kubenswrapper[4924]: I1211 14:17:57.073742 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p58d\" (UniqueName: \"kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d\") pod \"infrawatch-operators-rtpcv\" (UID: \"1222ec9c-52e5-475b-ad62-77060e1d8e88\") " pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:17:57 crc kubenswrapper[4924]: I1211 14:17:57.094629 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p58d\" (UniqueName: \"kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d\") pod \"infrawatch-operators-rtpcv\" (UID: \"1222ec9c-52e5-475b-ad62-77060e1d8e88\") " pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:17:57 crc kubenswrapper[4924]: I1211 14:17:57.209026 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:17:57 crc kubenswrapper[4924]: I1211 14:17:57.463271 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:17:57 crc kubenswrapper[4924]: I1211 14:17:57.513794 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rtpcv" event={"ID":"1222ec9c-52e5-475b-ad62-77060e1d8e88","Type":"ContainerStarted","Data":"ade9b46d7c16a3da07ffbb576c3e1845fa692a4fa56ccf43c8b00257ad18333a"} Dec 11 14:17:59 crc kubenswrapper[4924]: I1211 14:17:59.530475 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rtpcv" event={"ID":"1222ec9c-52e5-475b-ad62-77060e1d8e88","Type":"ContainerStarted","Data":"da53bb72f87fb1f12dc072789b786c4845a6c8cb3cd04c475e8e87db2c58d207"} Dec 11 14:17:59 crc kubenswrapper[4924]: I1211 14:17:59.546190 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-rtpcv" podStartSLOduration=2.67768902 podStartE2EDuration="3.546172621s" podCreationTimestamp="2025-12-11 14:17:56 +0000 UTC" firstStartedPulling="2025-12-11 14:17:57.474869472 +0000 UTC m=+1490.984350449" lastFinishedPulling="2025-12-11 14:17:58.343353063 +0000 UTC m=+1491.852834050" observedRunningTime="2025-12-11 14:17:59.54517975 +0000 UTC m=+1493.054660727" watchObservedRunningTime="2025-12-11 14:17:59.546172621 +0000 UTC m=+1493.055653608" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.733157 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jcv7d/must-gather-k858g"] Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.735274 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.740824 4924 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jcv7d"/"default-dockercfg-4hv88" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.741020 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jcv7d"/"openshift-service-ca.crt" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.741092 4924 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jcv7d"/"kube-root-ca.crt" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.757728 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jcv7d/must-gather-k858g"] Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.786693 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.786824 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbkv\" (UniqueName: \"kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.888051 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbkv\" (UniqueName: \"kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " 
pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.888189 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.888965 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:04 crc kubenswrapper[4924]: I1211 14:18:04.913917 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbkv\" (UniqueName: \"kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv\") pod \"must-gather-k858g\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:05 crc kubenswrapper[4924]: I1211 14:18:05.052300 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:18:05 crc kubenswrapper[4924]: I1211 14:18:05.310184 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jcv7d/must-gather-k858g"] Dec 11 14:18:05 crc kubenswrapper[4924]: I1211 14:18:05.568917 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jcv7d/must-gather-k858g" event={"ID":"417ce323-c2a5-4690-90ce-1b7b5a04dbc9","Type":"ContainerStarted","Data":"2f5605c6638a99f6cc8e61818b899ad82d651f8189b3636b41e950edeffa9cd6"} Dec 11 14:18:07 crc kubenswrapper[4924]: I1211 14:18:07.219929 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:07 crc kubenswrapper[4924]: I1211 14:18:07.220786 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:07 crc kubenswrapper[4924]: I1211 14:18:07.255594 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:07 crc kubenswrapper[4924]: I1211 14:18:07.607539 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:08 crc kubenswrapper[4924]: I1211 14:18:08.491861 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:18:09 crc kubenswrapper[4924]: I1211 14:18:09.596879 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-rtpcv" podUID="1222ec9c-52e5-475b-ad62-77060e1d8e88" containerName="registry-server" containerID="cri-o://da53bb72f87fb1f12dc072789b786c4845a6c8cb3cd04c475e8e87db2c58d207" gracePeriod=2 Dec 11 14:18:10 crc kubenswrapper[4924]: I1211 14:18:10.617000 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/infrawatch-operators-rtpcv" event={"ID":"1222ec9c-52e5-475b-ad62-77060e1d8e88","Type":"ContainerDied","Data":"da53bb72f87fb1f12dc072789b786c4845a6c8cb3cd04c475e8e87db2c58d207"} Dec 11 14:18:10 crc kubenswrapper[4924]: I1211 14:18:10.616821 4924 generic.go:334] "Generic (PLEG): container finished" podID="1222ec9c-52e5-475b-ad62-77060e1d8e88" containerID="da53bb72f87fb1f12dc072789b786c4845a6c8cb3cd04c475e8e87db2c58d207" exitCode=0 Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.065568 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.245164 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p58d\" (UniqueName: \"kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d\") pod \"1222ec9c-52e5-475b-ad62-77060e1d8e88\" (UID: \"1222ec9c-52e5-475b-ad62-77060e1d8e88\") " Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.262733 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d" (OuterVolumeSpecName: "kube-api-access-8p58d") pod "1222ec9c-52e5-475b-ad62-77060e1d8e88" (UID: "1222ec9c-52e5-475b-ad62-77060e1d8e88"). InnerVolumeSpecName "kube-api-access-8p58d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.347319 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p58d\" (UniqueName: \"kubernetes.io/projected/1222ec9c-52e5-475b-ad62-77060e1d8e88-kube-api-access-8p58d\") on node \"crc\" DevicePath \"\"" Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.652788 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jcv7d/must-gather-k858g" event={"ID":"417ce323-c2a5-4690-90ce-1b7b5a04dbc9","Type":"ContainerStarted","Data":"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8"} Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.655015 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rtpcv" event={"ID":"1222ec9c-52e5-475b-ad62-77060e1d8e88","Type":"ContainerDied","Data":"ade9b46d7c16a3da07ffbb576c3e1845fa692a4fa56ccf43c8b00257ad18333a"} Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.655073 4924 scope.go:117] "RemoveContainer" containerID="da53bb72f87fb1f12dc072789b786c4845a6c8cb3cd04c475e8e87db2c58d207" Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.655075 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-rtpcv" Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.688378 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.694961 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-rtpcv"] Dec 11 14:18:14 crc kubenswrapper[4924]: I1211 14:18:14.791451 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1222ec9c-52e5-475b-ad62-77060e1d8e88" path="/var/lib/kubelet/pods/1222ec9c-52e5-475b-ad62-77060e1d8e88/volumes" Dec 11 14:18:15 crc kubenswrapper[4924]: I1211 14:18:15.433807 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:18:15 crc kubenswrapper[4924]: I1211 14:18:15.433869 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:18:15 crc kubenswrapper[4924]: I1211 14:18:15.664495 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jcv7d/must-gather-k858g" event={"ID":"417ce323-c2a5-4690-90ce-1b7b5a04dbc9","Type":"ContainerStarted","Data":"93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b"} Dec 11 14:18:15 crc kubenswrapper[4924]: I1211 14:18:15.677991 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jcv7d/must-gather-k858g" podStartSLOduration=2.919037587 podStartE2EDuration="11.677971185s" 
podCreationTimestamp="2025-12-11 14:18:04 +0000 UTC" firstStartedPulling="2025-12-11 14:18:05.32201564 +0000 UTC m=+1498.831496617" lastFinishedPulling="2025-12-11 14:18:14.080949238 +0000 UTC m=+1507.590430215" observedRunningTime="2025-12-11 14:18:15.676577413 +0000 UTC m=+1509.186058400" watchObservedRunningTime="2025-12-11 14:18:15.677971185 +0000 UTC m=+1509.187452152" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.525456 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:26 crc kubenswrapper[4924]: E1211 14:18:26.525968 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1222ec9c-52e5-475b-ad62-77060e1d8e88" containerName="registry-server" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.525979 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="1222ec9c-52e5-475b-ad62-77060e1d8e88" containerName="registry-server" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.526088 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="1222ec9c-52e5-475b-ad62-77060e1d8e88" containerName="registry-server" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.527134 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.541111 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.547103 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.547245 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.547305 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6lq\" (UniqueName: \"kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.648029 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.648088 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vp6lq\" (UniqueName: \"kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.648130 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.648572 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.648653 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.671299 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6lq\" (UniqueName: \"kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq\") pod \"redhat-operators-kklvp\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:26 crc kubenswrapper[4924]: I1211 14:18:26.842125 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:27 crc kubenswrapper[4924]: I1211 14:18:27.100295 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:27 crc kubenswrapper[4924]: E1211 14:18:27.404279 4924 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f93176b_8460_4dfb_85e1_dfc5a010cdb2.slice/crio-conmon-9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f93176b_8460_4dfb_85e1_dfc5a010cdb2.slice/crio-9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0.scope\": RecentStats: unable to find data in memory cache]" Dec 11 14:18:27 crc kubenswrapper[4924]: I1211 14:18:27.760264 4924 generic.go:334] "Generic (PLEG): container finished" podID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerID="9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0" exitCode=0 Dec 11 14:18:27 crc kubenswrapper[4924]: I1211 14:18:27.760320 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerDied","Data":"9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0"} Dec 11 14:18:27 crc kubenswrapper[4924]: I1211 14:18:27.760367 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerStarted","Data":"2fc4cdd1f15da9a2af2c496f73a8d436985d2a41f144881534620ba182ad8313"} Dec 11 14:18:28 crc kubenswrapper[4924]: I1211 14:18:28.768249 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" 
event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerStarted","Data":"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e"} Dec 11 14:18:30 crc kubenswrapper[4924]: I1211 14:18:30.782146 4924 generic.go:334] "Generic (PLEG): container finished" podID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerID="5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e" exitCode=0 Dec 11 14:18:30 crc kubenswrapper[4924]: I1211 14:18:30.799628 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerDied","Data":"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e"} Dec 11 14:18:34 crc kubenswrapper[4924]: I1211 14:18:34.809161 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerStarted","Data":"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd"} Dec 11 14:18:34 crc kubenswrapper[4924]: I1211 14:18:34.827931 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kklvp" podStartSLOduration=2.080989054 podStartE2EDuration="8.827907369s" podCreationTimestamp="2025-12-11 14:18:26 +0000 UTC" firstStartedPulling="2025-12-11 14:18:27.761747894 +0000 UTC m=+1521.271228871" lastFinishedPulling="2025-12-11 14:18:34.508666209 +0000 UTC m=+1528.018147186" observedRunningTime="2025-12-11 14:18:34.82430846 +0000 UTC m=+1528.333789437" watchObservedRunningTime="2025-12-11 14:18:34.827907369 +0000 UTC m=+1528.337388346" Dec 11 14:18:36 crc kubenswrapper[4924]: I1211 14:18:36.842422 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:36 crc kubenswrapper[4924]: I1211 14:18:36.843650 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:37 crc kubenswrapper[4924]: I1211 14:18:37.893059 4924 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kklvp" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="registry-server" probeResult="failure" output=< Dec 11 14:18:37 crc kubenswrapper[4924]: timeout: failed to connect service ":50051" within 1s Dec 11 14:18:37 crc kubenswrapper[4924]: > Dec 11 14:18:45 crc kubenswrapper[4924]: I1211 14:18:45.433412 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:18:45 crc kubenswrapper[4924]: I1211 14:18:45.434085 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:18:46 crc kubenswrapper[4924]: I1211 14:18:46.891650 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:46 crc kubenswrapper[4924]: I1211 14:18:46.931980 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:47 crc kubenswrapper[4924]: I1211 14:18:47.124439 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:48 crc kubenswrapper[4924]: I1211 14:18:48.897711 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kklvp" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" 
containerName="registry-server" containerID="cri-o://41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd" gracePeriod=2 Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.241395 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.368457 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content\") pod \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.368626 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp6lq\" (UniqueName: \"kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq\") pod \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.368689 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities\") pod \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\" (UID: \"9f93176b-8460-4dfb-85e1-dfc5a010cdb2\") " Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.370086 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities" (OuterVolumeSpecName: "utilities") pod "9f93176b-8460-4dfb-85e1-dfc5a010cdb2" (UID: "9f93176b-8460-4dfb-85e1-dfc5a010cdb2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.374049 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq" (OuterVolumeSpecName: "kube-api-access-vp6lq") pod "9f93176b-8460-4dfb-85e1-dfc5a010cdb2" (UID: "9f93176b-8460-4dfb-85e1-dfc5a010cdb2"). InnerVolumeSpecName "kube-api-access-vp6lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.470055 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp6lq\" (UniqueName: \"kubernetes.io/projected/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-kube-api-access-vp6lq\") on node \"crc\" DevicePath \"\"" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.470390 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.481159 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f93176b-8460-4dfb-85e1-dfc5a010cdb2" (UID: "9f93176b-8460-4dfb-85e1-dfc5a010cdb2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.571492 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f93176b-8460-4dfb-85e1-dfc5a010cdb2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.918445 4924 generic.go:334] "Generic (PLEG): container finished" podID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerID="41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd" exitCode=0 Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.918488 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerDied","Data":"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd"} Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.918514 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kklvp" event={"ID":"9f93176b-8460-4dfb-85e1-dfc5a010cdb2","Type":"ContainerDied","Data":"2fc4cdd1f15da9a2af2c496f73a8d436985d2a41f144881534620ba182ad8313"} Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.918538 4924 scope.go:117] "RemoveContainer" containerID="41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.918695 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kklvp" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.938548 4924 scope.go:117] "RemoveContainer" containerID="5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.963503 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.969412 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kklvp"] Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.970116 4924 scope.go:117] "RemoveContainer" containerID="9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.989375 4924 scope.go:117] "RemoveContainer" containerID="41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd" Dec 11 14:18:49 crc kubenswrapper[4924]: E1211 14:18:49.990857 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd\": container with ID starting with 41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd not found: ID does not exist" containerID="41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.990906 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd"} err="failed to get container status \"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd\": rpc error: code = NotFound desc = could not find container \"41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd\": container with ID starting with 41996d6d71839bc452a91f47343eb5c60997b81547834cae5265eb811dba1bcd not found: ID does 
not exist" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.990934 4924 scope.go:117] "RemoveContainer" containerID="5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e" Dec 11 14:18:49 crc kubenswrapper[4924]: E1211 14:18:49.991507 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e\": container with ID starting with 5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e not found: ID does not exist" containerID="5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.991546 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e"} err="failed to get container status \"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e\": rpc error: code = NotFound desc = could not find container \"5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e\": container with ID starting with 5e9c2d71e23bc044b985f6cd5aa021a458676a504547924983b7574a8d4ba89e not found: ID does not exist" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.991565 4924 scope.go:117] "RemoveContainer" containerID="9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0" Dec 11 14:18:49 crc kubenswrapper[4924]: E1211 14:18:49.991889 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0\": container with ID starting with 9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0 not found: ID does not exist" containerID="9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0" Dec 11 14:18:49 crc kubenswrapper[4924]: I1211 14:18:49.991910 4924 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0"} err="failed to get container status \"9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0\": rpc error: code = NotFound desc = could not find container \"9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0\": container with ID starting with 9b64927d19c056034d88d81b0d327861ba7fd38440f9a27636caa93c5fe74bb0 not found: ID does not exist" Dec 11 14:18:50 crc kubenswrapper[4924]: I1211 14:18:50.790378 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" path="/var/lib/kubelet/pods/9f93176b-8460-4dfb-85e1-dfc5a010cdb2/volumes" Dec 11 14:18:53 crc kubenswrapper[4924]: I1211 14:18:53.427768 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-n9z8x_563b2379-6f6c-4604-90e7-786d71191a32/control-plane-machine-set-operator/0.log" Dec 11 14:18:53 crc kubenswrapper[4924]: I1211 14:18:53.605339 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dd9cn_488b109d-f524-4b78-a1a9-d07a1178236d/machine-api-operator/0.log" Dec 11 14:18:53 crc kubenswrapper[4924]: I1211 14:18:53.616279 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dd9cn_488b109d-f524-4b78-a1a9-d07a1178236d/kube-rbac-proxy/0.log" Dec 11 14:19:04 crc kubenswrapper[4924]: I1211 14:19:04.448680 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-7q8fg_400979a3-71d8-499d-8c9d-087f3f50bd16/cert-manager-controller/0.log" Dec 11 14:19:04 crc kubenswrapper[4924]: I1211 14:19:04.591071 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-8kr7m_6500304b-4ffe-41bb-9e9a-ecf681b14e63/cert-manager-cainjector/0.log" Dec 11 14:19:04 crc kubenswrapper[4924]: I1211 14:19:04.599187 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-w9dnq_4b21fe71-2162-433e-8080-333688ba4bea/cert-manager-webhook/0.log" Dec 11 14:19:15 crc kubenswrapper[4924]: I1211 14:19:15.433040 4924 patch_prober.go:28] interesting pod/machine-config-daemon-rfwqf container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:19:15 crc kubenswrapper[4924]: I1211 14:19:15.433683 4924 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:19:15 crc kubenswrapper[4924]: I1211 14:19:15.433734 4924 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" Dec 11 14:19:15 crc kubenswrapper[4924]: I1211 14:19:15.434307 4924 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4"} pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:19:15 crc kubenswrapper[4924]: I1211 14:19:15.434408 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" 
podUID="fafc4b5e-18de-4683-b008-775c510f12bf" containerName="machine-config-daemon" containerID="cri-o://bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" gracePeriod=600 Dec 11 14:19:15 crc kubenswrapper[4924]: E1211 14:19:15.575270 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:19:16 crc kubenswrapper[4924]: I1211 14:19:16.096098 4924 generic.go:334] "Generic (PLEG): container finished" podID="fafc4b5e-18de-4683-b008-775c510f12bf" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" exitCode=0 Dec 11 14:19:16 crc kubenswrapper[4924]: I1211 14:19:16.096198 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" event={"ID":"fafc4b5e-18de-4683-b008-775c510f12bf","Type":"ContainerDied","Data":"bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4"} Dec 11 14:19:16 crc kubenswrapper[4924]: I1211 14:19:16.096526 4924 scope.go:117] "RemoveContainer" containerID="5861f804b3b60124505c75e8ca85aada7f1b2041baaf2261006aa9edcedeb752" Dec 11 14:19:16 crc kubenswrapper[4924]: I1211 14:19:16.097221 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:19:16 crc kubenswrapper[4924]: E1211 14:19:16.097600 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.617226 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/util/0.log" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.768794 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/util/0.log" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.807573 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/pull/0.log" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.852139 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/pull/0.log" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.964601 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/extract/0.log" Dec 11 14:19:18 crc kubenswrapper[4924]: I1211 14:19:18.967097 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.005107 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a59zgs_432f37b8-3eac-4e9a-bc87-fa34be6e9fbd/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.136264 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.282176 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.283349 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.286146 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.464315 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.469690 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.489559 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210qbl5n_bbefe9ba-780c-43d7-9e6c-e0d204adb2f0/extract/0.log" Dec 11 
14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.613992 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.779051 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.790343 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.806215 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/pull/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.980537 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/util/0.log" Dec 11 14:19:19 crc kubenswrapper[4924]: I1211 14:19:19.981648 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/pull/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.015115 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwvdtt_fe27eea4-41d6-4184-8c58-9d160407dd65/extract/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.155893 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/util/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.351511 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/util/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.362576 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/pull/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.384112 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/pull/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.529223 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/pull/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.535107 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/util/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.537775 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ef9lhl_c2fc2981-ac1f-431c-8ffd-c07b443211af/extract/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.673973 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-utilities/0.log" Dec 11 14:19:20 crc 
kubenswrapper[4924]: I1211 14:19:20.821967 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-utilities/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.858927 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-content/0.log" Dec 11 14:19:20 crc kubenswrapper[4924]: I1211 14:19:20.888298 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-content/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.038584 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-content/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.048366 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/extract-utilities/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.197726 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w6xsm_1531a8b4-8fbe-46d1-b7fe-e4ef93f57224/registry-server/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.281587 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-utilities/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.424904 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-content/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.467110 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-utilities/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.468416 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-content/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.595102 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-utilities/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.678287 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/extract-content/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.794968 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jwzqd_a3fcde33-0260-4abe-a246-3606d271519c/marketplace-operator/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.837407 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qg72j_532a71ee-9b39-4e46-9900-cdb1a1bea3ec/registry-server/0.log" Dec 11 14:19:21 crc kubenswrapper[4924]: I1211 14:19:21.922666 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-utilities/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.038596 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-utilities/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.049005 4924 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-content/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.063813 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-content/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.197131 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-utilities/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.212038 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/extract-content/0.log" Dec 11 14:19:22 crc kubenswrapper[4924]: I1211 14:19:22.439220 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4v7vf_9e52be9c-2b4a-4fff-bb8e-24d5725d6c41/registry-server/0.log" Dec 11 14:19:27 crc kubenswrapper[4924]: I1211 14:19:27.782755 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:19:27 crc kubenswrapper[4924]: E1211 14:19:27.783614 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:19:32 crc kubenswrapper[4924]: I1211 14:19:32.674255 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-fdkqr_29650e77-3c2e-45da-bac3-f26fe39e95d9/prometheus-operator/0.log" Dec 11 14:19:32 crc 
kubenswrapper[4924]: I1211 14:19:32.843734 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6567b776c7-t2gs6_847bf44c-6f92-49a3-8714-34558df6f0f7/prometheus-operator-admission-webhook/0.log" Dec 11 14:19:32 crc kubenswrapper[4924]: I1211 14:19:32.876800 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6567b776c7-tn75n_7015f850-c4bf-4212-b23d-4e14e2e8edb1/prometheus-operator-admission-webhook/0.log" Dec 11 14:19:32 crc kubenswrapper[4924]: I1211 14:19:32.999384 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-b4c8b_226608b6-d0d0-419d-aa80-788bfe423da4/operator/0.log" Dec 11 14:19:33 crc kubenswrapper[4924]: I1211 14:19:33.080870 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-hg49r_2b4ea8b1-e726-4f02-aa91-f97dc1122eab/perses-operator/0.log" Dec 11 14:19:38 crc kubenswrapper[4924]: I1211 14:19:38.783061 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:19:38 crc kubenswrapper[4924]: E1211 14:19:38.783917 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:19:51 crc kubenswrapper[4924]: I1211 14:19:51.783208 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:19:51 crc kubenswrapper[4924]: E1211 14:19:51.784037 4924 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 14:19:57.983690 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:19:57 crc kubenswrapper[4924]: E1211 14:19:57.985519 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="registry-server" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 14:19:57.985623 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="registry-server" Dec 11 14:19:57 crc kubenswrapper[4924]: E1211 14:19:57.985741 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="extract-utilities" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 14:19:57.985821 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="extract-utilities" Dec 11 14:19:57 crc kubenswrapper[4924]: E1211 14:19:57.985907 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="extract-content" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 14:19:57.986028 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="extract-content" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 14:19:57.986240 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f93176b-8460-4dfb-85e1-dfc5a010cdb2" containerName="registry-server" Dec 11 14:19:57 crc kubenswrapper[4924]: I1211 
14:19:57.987397 4924 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.001731 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.130123 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfmn\" (UniqueName: \"kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.130166 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.130224 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.231244 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 
14:19:58.231386 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfmn\" (UniqueName: \"kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.231420 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.232669 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.232728 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.257076 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfmn\" (UniqueName: \"kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn\") pod \"certified-operators-rdstt\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.315964 4924 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:19:58 crc kubenswrapper[4924]: I1211 14:19:58.844816 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:19:59 crc kubenswrapper[4924]: I1211 14:19:59.411041 4924 generic.go:334] "Generic (PLEG): container finished" podID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerID="ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b" exitCode=0 Dec 11 14:19:59 crc kubenswrapper[4924]: I1211 14:19:59.411114 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerDied","Data":"ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b"} Dec 11 14:19:59 crc kubenswrapper[4924]: I1211 14:19:59.411304 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerStarted","Data":"f18ec96c3dde8eb35c1670f91f5a3745921aa7cde8a09d74f24eefb2ae97a3d8"} Dec 11 14:19:59 crc kubenswrapper[4924]: I1211 14:19:59.412959 4924 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:20:01 crc kubenswrapper[4924]: I1211 14:20:01.425647 4924 generic.go:334] "Generic (PLEG): container finished" podID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerID="5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567" exitCode=0 Dec 11 14:20:01 crc kubenswrapper[4924]: I1211 14:20:01.425746 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerDied","Data":"5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567"} Dec 11 14:20:04 crc kubenswrapper[4924]: I1211 
14:20:04.783060 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:20:04 crc kubenswrapper[4924]: E1211 14:20:04.783941 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:20:06 crc kubenswrapper[4924]: I1211 14:20:06.461904 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerStarted","Data":"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52"} Dec 11 14:20:06 crc kubenswrapper[4924]: I1211 14:20:06.480764 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rdstt" podStartSLOduration=4.80298283 podStartE2EDuration="9.480748406s" podCreationTimestamp="2025-12-11 14:19:57 +0000 UTC" firstStartedPulling="2025-12-11 14:19:59.412626348 +0000 UTC m=+1612.922107325" lastFinishedPulling="2025-12-11 14:20:04.090391924 +0000 UTC m=+1617.599872901" observedRunningTime="2025-12-11 14:20:06.480060825 +0000 UTC m=+1619.989541802" watchObservedRunningTime="2025-12-11 14:20:06.480748406 +0000 UTC m=+1619.990229383" Dec 11 14:20:08 crc kubenswrapper[4924]: I1211 14:20:08.316451 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:08 crc kubenswrapper[4924]: I1211 14:20:08.316615 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:08 crc 
kubenswrapper[4924]: I1211 14:20:08.359040 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:18 crc kubenswrapper[4924]: I1211 14:20:18.363165 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:18 crc kubenswrapper[4924]: I1211 14:20:18.408941 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:20:18 crc kubenswrapper[4924]: I1211 14:20:18.557785 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rdstt" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="registry-server" containerID="cri-o://5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52" gracePeriod=2 Dec 11 14:20:18 crc kubenswrapper[4924]: I1211 14:20:18.783126 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:20:18 crc kubenswrapper[4924]: E1211 14:20:18.783660 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.403675 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.472747 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities\") pod \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.472903 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content\") pod \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.472947 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfmn\" (UniqueName: \"kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn\") pod \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\" (UID: \"bfda27f8-e9b1-4705-91a1-8027c5ac09a5\") " Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.473899 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities" (OuterVolumeSpecName: "utilities") pod "bfda27f8-e9b1-4705-91a1-8027c5ac09a5" (UID: "bfda27f8-e9b1-4705-91a1-8027c5ac09a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.479773 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn" (OuterVolumeSpecName: "kube-api-access-pvfmn") pod "bfda27f8-e9b1-4705-91a1-8027c5ac09a5" (UID: "bfda27f8-e9b1-4705-91a1-8027c5ac09a5"). InnerVolumeSpecName "kube-api-access-pvfmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.527479 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfda27f8-e9b1-4705-91a1-8027c5ac09a5" (UID: "bfda27f8-e9b1-4705-91a1-8027c5ac09a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.568174 4924 generic.go:334] "Generic (PLEG): container finished" podID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerID="5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52" exitCode=0 Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.568222 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerDied","Data":"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52"} Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.568271 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rdstt" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.568298 4924 scope.go:117] "RemoveContainer" containerID="5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.568284 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rdstt" event={"ID":"bfda27f8-e9b1-4705-91a1-8027c5ac09a5","Type":"ContainerDied","Data":"f18ec96c3dde8eb35c1670f91f5a3745921aa7cde8a09d74f24eefb2ae97a3d8"} Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.574274 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.574298 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.574309 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvfmn\" (UniqueName: \"kubernetes.io/projected/bfda27f8-e9b1-4705-91a1-8027c5ac09a5-kube-api-access-pvfmn\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.596527 4924 scope.go:117] "RemoveContainer" containerID="5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.601245 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.607052 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rdstt"] Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.618630 4924 scope.go:117] 
"RemoveContainer" containerID="ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.643278 4924 scope.go:117] "RemoveContainer" containerID="5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52" Dec 11 14:20:19 crc kubenswrapper[4924]: E1211 14:20:19.643972 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52\": container with ID starting with 5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52 not found: ID does not exist" containerID="5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.644017 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52"} err="failed to get container status \"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52\": rpc error: code = NotFound desc = could not find container \"5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52\": container with ID starting with 5919a04b86cd369c78d5c2dbdaabadee8bc10e9cfbe8793cf7fc9d8056b13e52 not found: ID does not exist" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.644044 4924 scope.go:117] "RemoveContainer" containerID="5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567" Dec 11 14:20:19 crc kubenswrapper[4924]: E1211 14:20:19.644592 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567\": container with ID starting with 5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567 not found: ID does not exist" containerID="5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567" Dec 11 14:20:19 crc 
kubenswrapper[4924]: I1211 14:20:19.644638 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567"} err="failed to get container status \"5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567\": rpc error: code = NotFound desc = could not find container \"5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567\": container with ID starting with 5598ed21c17489372c6cb69cdf1125619375ff568d4db8a412825e4150cd9567 not found: ID does not exist" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.644670 4924 scope.go:117] "RemoveContainer" containerID="ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b" Dec 11 14:20:19 crc kubenswrapper[4924]: E1211 14:20:19.645018 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b\": container with ID starting with ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b not found: ID does not exist" containerID="ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b" Dec 11 14:20:19 crc kubenswrapper[4924]: I1211 14:20:19.645064 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b"} err="failed to get container status \"ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b\": rpc error: code = NotFound desc = could not find container \"ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b\": container with ID starting with ca16d59bfb782fb2e17e08aebaf08bc8121aca24803043d7986f4c03ec3b926b not found: ID does not exist" Dec 11 14:20:20 crc kubenswrapper[4924]: I1211 14:20:20.791378 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" 
path="/var/lib/kubelet/pods/bfda27f8-e9b1-4705-91a1-8027c5ac09a5/volumes" Dec 11 14:20:23 crc kubenswrapper[4924]: I1211 14:20:23.600614 4924 generic.go:334] "Generic (PLEG): container finished" podID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerID="0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8" exitCode=0 Dec 11 14:20:23 crc kubenswrapper[4924]: I1211 14:20:23.600699 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jcv7d/must-gather-k858g" event={"ID":"417ce323-c2a5-4690-90ce-1b7b5a04dbc9","Type":"ContainerDied","Data":"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8"} Dec 11 14:20:23 crc kubenswrapper[4924]: I1211 14:20:23.602056 4924 scope.go:117] "RemoveContainer" containerID="0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8" Dec 11 14:20:24 crc kubenswrapper[4924]: I1211 14:20:24.518044 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jcv7d_must-gather-k858g_417ce323-c2a5-4690-90ce-1b7b5a04dbc9/gather/0.log" Dec 11 14:20:29 crc kubenswrapper[4924]: I1211 14:20:29.783085 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:20:29 crc kubenswrapper[4924]: E1211 14:20:29.783658 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.195570 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jcv7d/must-gather-k858g"] Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.196135 4924 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-must-gather-jcv7d/must-gather-k858g" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="copy" containerID="cri-o://93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b" gracePeriod=2 Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.203266 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jcv7d/must-gather-k858g"] Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.530319 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jcv7d_must-gather-k858g_417ce323-c2a5-4690-90ce-1b7b5a04dbc9/copy/0.log" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.531079 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.656534 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwbkv\" (UniqueName: \"kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv\") pod \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.656767 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output\") pod \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\" (UID: \"417ce323-c2a5-4690-90ce-1b7b5a04dbc9\") " Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.663744 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv" (OuterVolumeSpecName: "kube-api-access-fwbkv") pod "417ce323-c2a5-4690-90ce-1b7b5a04dbc9" (UID: "417ce323-c2a5-4690-90ce-1b7b5a04dbc9"). InnerVolumeSpecName "kube-api-access-fwbkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.669589 4924 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jcv7d_must-gather-k858g_417ce323-c2a5-4690-90ce-1b7b5a04dbc9/copy/0.log" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.672965 4924 generic.go:334] "Generic (PLEG): container finished" podID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerID="93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b" exitCode=143 Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.673048 4924 scope.go:117] "RemoveContainer" containerID="93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.673189 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jcv7d/must-gather-k858g" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.714730 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "417ce323-c2a5-4690-90ce-1b7b5a04dbc9" (UID: "417ce323-c2a5-4690-90ce-1b7b5a04dbc9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.890640 4924 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.890677 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwbkv\" (UniqueName: \"kubernetes.io/projected/417ce323-c2a5-4690-90ce-1b7b5a04dbc9-kube-api-access-fwbkv\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.917789 4924 scope.go:117] "RemoveContainer" containerID="0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.967617 4924 scope.go:117] "RemoveContainer" containerID="93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b" Dec 11 14:20:31 crc kubenswrapper[4924]: E1211 14:20:31.969070 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b\": container with ID starting with 93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b not found: ID does not exist" containerID="93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.969102 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b"} err="failed to get container status \"93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b\": rpc error: code = NotFound desc = could not find container \"93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b\": container with ID starting with 93052dab80baf7689e5a1f2b702293805744d88b0670bd4fb27067d93d92343b 
not found: ID does not exist" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.969166 4924 scope.go:117] "RemoveContainer" containerID="0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8" Dec 11 14:20:31 crc kubenswrapper[4924]: E1211 14:20:31.970121 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8\": container with ID starting with 0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8 not found: ID does not exist" containerID="0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8" Dec 11 14:20:31 crc kubenswrapper[4924]: I1211 14:20:31.970156 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8"} err="failed to get container status \"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8\": rpc error: code = NotFound desc = could not find container \"0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8\": container with ID starting with 0d280c9fd8e8f3315d6c705e5094ddd2f4649851609a173213a546666e6021a8 not found: ID does not exist" Dec 11 14:20:32 crc kubenswrapper[4924]: I1211 14:20:32.791528 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" path="/var/lib/kubelet/pods/417ce323-c2a5-4690-90ce-1b7b5a04dbc9/volumes" Dec 11 14:20:41 crc kubenswrapper[4924]: I1211 14:20:41.783380 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:20:41 crc kubenswrapper[4924]: E1211 14:20:41.784343 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:20:53 crc kubenswrapper[4924]: I1211 14:20:53.783621 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:20:53 crc kubenswrapper[4924]: E1211 14:20:53.784382 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:21:06 crc kubenswrapper[4924]: I1211 14:21:06.790515 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:21:06 crc kubenswrapper[4924]: E1211 14:21:06.794095 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:21:19 crc kubenswrapper[4924]: I1211 14:21:19.783125 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:21:19 crc kubenswrapper[4924]: E1211 14:21:19.783878 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:21:32 crc kubenswrapper[4924]: I1211 14:21:32.783644 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:21:32 crc kubenswrapper[4924]: E1211 14:21:32.784463 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.877038 4924 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:40 crc kubenswrapper[4924]: E1211 14:21:40.877977 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="gather" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878003 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="gather" Dec 11 14:21:40 crc kubenswrapper[4924]: E1211 14:21:40.878024 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="extract-content" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878032 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="extract-content" Dec 11 14:21:40 crc kubenswrapper[4924]: E1211 14:21:40.878050 4924 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="extract-utilities" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878058 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="extract-utilities" Dec 11 14:21:40 crc kubenswrapper[4924]: E1211 14:21:40.878080 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="registry-server" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878088 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="registry-server" Dec 11 14:21:40 crc kubenswrapper[4924]: E1211 14:21:40.878104 4924 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="copy" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878111 4924 state_mem.go:107] "Deleted CPUSet assignment" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="copy" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878245 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfda27f8-e9b1-4705-91a1-8027c5ac09a5" containerName="registry-server" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878259 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="gather" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.878275 4924 memory_manager.go:354] "RemoveStaleState removing state" podUID="417ce323-c2a5-4690-90ce-1b7b5a04dbc9" containerName="copy" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.881037 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:40 crc kubenswrapper[4924]: I1211 14:21:40.890283 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.044309 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnv9x\" (UniqueName: \"kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.044666 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.044718 4924 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.146555 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.146662 4924 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.146710 4924 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnv9x\" (UniqueName: \"kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.147262 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.147262 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.173922 4924 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnv9x\" (UniqueName: \"kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x\") pod \"community-operators-s4z8b\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.206643 4924 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:41 crc kubenswrapper[4924]: I1211 14:21:41.497832 4924 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:42 crc kubenswrapper[4924]: I1211 14:21:42.204249 4924 generic.go:334] "Generic (PLEG): container finished" podID="5fd52a56-a20b-49ae-a624-0c269f9c871e" containerID="c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111" exitCode=0 Dec 11 14:21:42 crc kubenswrapper[4924]: I1211 14:21:42.204360 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerDied","Data":"c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111"} Dec 11 14:21:42 crc kubenswrapper[4924]: I1211 14:21:42.204570 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerStarted","Data":"1b37799e277c1c8e8d1fe584a0ef3d1aa2575b050f64cd09c6c26badd81f2470"} Dec 11 14:21:44 crc kubenswrapper[4924]: I1211 14:21:44.220818 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerStarted","Data":"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338"} Dec 11 14:21:45 crc kubenswrapper[4924]: I1211 14:21:45.233254 4924 generic.go:334] "Generic (PLEG): container finished" podID="5fd52a56-a20b-49ae-a624-0c269f9c871e" containerID="fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338" exitCode=0 Dec 11 14:21:45 crc kubenswrapper[4924]: I1211 14:21:45.233334 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" 
event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerDied","Data":"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338"} Dec 11 14:21:46 crc kubenswrapper[4924]: I1211 14:21:46.242081 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerStarted","Data":"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4"} Dec 11 14:21:46 crc kubenswrapper[4924]: I1211 14:21:46.267926 4924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s4z8b" podStartSLOduration=2.629328997 podStartE2EDuration="6.267905909s" podCreationTimestamp="2025-12-11 14:21:40 +0000 UTC" firstStartedPulling="2025-12-11 14:21:42.206510434 +0000 UTC m=+1715.715991411" lastFinishedPulling="2025-12-11 14:21:45.845087346 +0000 UTC m=+1719.354568323" observedRunningTime="2025-12-11 14:21:46.259107655 +0000 UTC m=+1719.768588632" watchObservedRunningTime="2025-12-11 14:21:46.267905909 +0000 UTC m=+1719.777386896" Dec 11 14:21:46 crc kubenswrapper[4924]: I1211 14:21:46.788235 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:21:46 crc kubenswrapper[4924]: E1211 14:21:46.788548 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:21:51 crc kubenswrapper[4924]: I1211 14:21:51.207769 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:51 crc 
kubenswrapper[4924]: I1211 14:21:51.208125 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:51 crc kubenswrapper[4924]: I1211 14:21:51.246197 4924 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:51 crc kubenswrapper[4924]: I1211 14:21:51.312020 4924 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:51 crc kubenswrapper[4924]: I1211 14:21:51.478410 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:53 crc kubenswrapper[4924]: I1211 14:21:53.287758 4924 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s4z8b" podUID="5fd52a56-a20b-49ae-a624-0c269f9c871e" containerName="registry-server" containerID="cri-o://513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4" gracePeriod=2 Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.187878 4924 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.295608 4924 generic.go:334] "Generic (PLEG): container finished" podID="5fd52a56-a20b-49ae-a624-0c269f9c871e" containerID="513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4" exitCode=0 Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.295665 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerDied","Data":"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4"} Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.295671 4924 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s4z8b" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.295695 4924 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s4z8b" event={"ID":"5fd52a56-a20b-49ae-a624-0c269f9c871e","Type":"ContainerDied","Data":"1b37799e277c1c8e8d1fe584a0ef3d1aa2575b050f64cd09c6c26badd81f2470"} Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.295718 4924 scope.go:117] "RemoveContainer" containerID="513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.313186 4924 scope.go:117] "RemoveContainer" containerID="fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.330469 4924 scope.go:117] "RemoveContainer" containerID="c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.336985 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnv9x\" (UniqueName: \"kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x\") pod \"5fd52a56-a20b-49ae-a624-0c269f9c871e\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.337059 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities\") pod \"5fd52a56-a20b-49ae-a624-0c269f9c871e\" (UID: \"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.337154 4924 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content\") pod \"5fd52a56-a20b-49ae-a624-0c269f9c871e\" (UID: 
\"5fd52a56-a20b-49ae-a624-0c269f9c871e\") " Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.338923 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities" (OuterVolumeSpecName: "utilities") pod "5fd52a56-a20b-49ae-a624-0c269f9c871e" (UID: "5fd52a56-a20b-49ae-a624-0c269f9c871e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.344461 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x" (OuterVolumeSpecName: "kube-api-access-mnv9x") pod "5fd52a56-a20b-49ae-a624-0c269f9c871e" (UID: "5fd52a56-a20b-49ae-a624-0c269f9c871e"). InnerVolumeSpecName "kube-api-access-mnv9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.384517 4924 scope.go:117] "RemoveContainer" containerID="513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4" Dec 11 14:21:54 crc kubenswrapper[4924]: E1211 14:21:54.384986 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4\": container with ID starting with 513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4 not found: ID does not exist" containerID="513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.385042 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4"} err="failed to get container status \"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4\": rpc error: code = NotFound desc = could not find container 
\"513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4\": container with ID starting with 513768215f397c72ff1b78282d1e0145edf8597b2bd28e5ea436fd131bd1fca4 not found: ID does not exist" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.385201 4924 scope.go:117] "RemoveContainer" containerID="fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338" Dec 11 14:21:54 crc kubenswrapper[4924]: E1211 14:21:54.385604 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338\": container with ID starting with fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338 not found: ID does not exist" containerID="fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.385627 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338"} err="failed to get container status \"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338\": rpc error: code = NotFound desc = could not find container \"fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338\": container with ID starting with fab9bc82558d928377cb33b7992dfe695a1ad2e0275eacb20a93e079cbe14338 not found: ID does not exist" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.385691 4924 scope.go:117] "RemoveContainer" containerID="c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111" Dec 11 14:21:54 crc kubenswrapper[4924]: E1211 14:21:54.386095 4924 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111\": container with ID starting with c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111 not found: ID does not exist" 
containerID="c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.386122 4924 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111"} err="failed to get container status \"c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111\": rpc error: code = NotFound desc = could not find container \"c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111\": container with ID starting with c63266c58857cac16af51b594a2c061317b35fad3551d51af3a7f58e77cf5111 not found: ID does not exist" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.392377 4924 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fd52a56-a20b-49ae-a624-0c269f9c871e" (UID: "5fd52a56-a20b-49ae-a624-0c269f9c871e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.439016 4924 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnv9x\" (UniqueName: \"kubernetes.io/projected/5fd52a56-a20b-49ae-a624-0c269f9c871e-kube-api-access-mnv9x\") on node \"crc\" DevicePath \"\"" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.439053 4924 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.439063 4924 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fd52a56-a20b-49ae-a624-0c269f9c871e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.625710 4924 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.630760 4924 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s4z8b"] Dec 11 14:21:54 crc kubenswrapper[4924]: I1211 14:21:54.790293 4924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd52a56-a20b-49ae-a624-0c269f9c871e" path="/var/lib/kubelet/pods/5fd52a56-a20b-49ae-a624-0c269f9c871e/volumes" Dec 11 14:21:58 crc kubenswrapper[4924]: I1211 14:21:58.783463 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:21:58 crc kubenswrapper[4924]: E1211 14:21:58.784158 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:22:13 crc kubenswrapper[4924]: I1211 14:22:13.783556 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:22:13 crc kubenswrapper[4924]: E1211 14:22:13.784258 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:22:24 crc kubenswrapper[4924]: I1211 14:22:24.783214 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:22:24 crc kubenswrapper[4924]: E1211 14:22:24.784033 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:22:37 crc kubenswrapper[4924]: I1211 14:22:37.783727 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:22:37 crc kubenswrapper[4924]: E1211 14:22:37.784298 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:22:52 crc kubenswrapper[4924]: I1211 14:22:52.783281 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:22:52 crc kubenswrapper[4924]: E1211 14:22:52.784173 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:23:03 crc kubenswrapper[4924]: I1211 14:23:03.782739 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:23:03 crc kubenswrapper[4924]: E1211 14:23:03.783607 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf" Dec 11 14:23:14 crc kubenswrapper[4924]: I1211 14:23:14.783670 4924 scope.go:117] "RemoveContainer" containerID="bafde93ff185b0b6255f08fe4c7d4d6b299b866ee1b31af2335044c80e2c11f4" Dec 11 14:23:14 crc kubenswrapper[4924]: E1211 14:23:14.784484 4924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rfwqf_openshift-machine-config-operator(fafc4b5e-18de-4683-b008-775c510f12bf)\"" pod="openshift-machine-config-operator/machine-config-daemon-rfwqf" podUID="fafc4b5e-18de-4683-b008-775c510f12bf"